
big_bird

mindnlp.transformers.models.big_bird.configuration_big_bird.BigBirdConfig

Bases: PretrainedConfig

This is the configuration class to store the configuration of a [BigBirdModel]. It is used to instantiate a BigBird model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a configuration similar to that of the BigBird google/bigbird-roberta-base architecture.

Configuration objects inherit from [PretrainedConfig] and can be used to control the model outputs. Read the documentation from [PretrainedConfig] for more information.

PARAMETER DESCRIPTION
vocab_size

Vocabulary size of the BigBird model. Defines the number of different tokens that can be represented by the input_ids passed when calling [BigBirdModel].

TYPE: `int`, *optional*, defaults to 50358 DEFAULT: 50358

hidden_size

Dimension of the encoder layers and the pooler layer.

TYPE: `int`, *optional*, defaults to 768 DEFAULT: 768

num_hidden_layers

Number of hidden layers in the Transformer encoder.

TYPE: `int`, *optional*, defaults to 12 DEFAULT: 12

num_attention_heads

Number of attention heads for each attention layer in the Transformer encoder.

TYPE: `int`, *optional*, defaults to 12 DEFAULT: 12

intermediate_size

Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.

TYPE: `int`, *optional*, defaults to 3072 DEFAULT: 3072

hidden_act

The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu" and "gelu_new" are supported.

TYPE: `str` or `function`, *optional*, defaults to `"gelu_new"` DEFAULT: 'gelu_new'

hidden_dropout_prob

The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.

TYPE: `float`, *optional*, defaults to 0.1 DEFAULT: 0.1

attention_probs_dropout_prob

The dropout ratio for the attention probabilities.

TYPE: `float`, *optional*, defaults to 0.1 DEFAULT: 0.1

max_position_embeddings

The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 1024 or 2048 or 4096).

TYPE: `int`, *optional*, defaults to 4096 DEFAULT: 4096

type_vocab_size

The vocabulary size of the token_type_ids passed when calling [BigBirdModel].

TYPE: `int`, *optional*, defaults to 2 DEFAULT: 2

initializer_range

The standard deviation of the truncated_normal_initializer for initializing all weight matrices.

TYPE: `float`, *optional*, defaults to 0.02 DEFAULT: 0.02

layer_norm_eps

The epsilon used by the layer normalization layers.

TYPE: `float`, *optional*, defaults to 1e-12 DEFAULT: 1e-12

is_decoder

Whether the model is used as a decoder or not. If False, the model is used as an encoder.

TYPE: `bool`, *optional*, defaults to `False`

use_cache

Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if config.is_decoder=True.

TYPE: `bool`, *optional*, defaults to `True` DEFAULT: True

attention_type

Whether to use block sparse attention (with O(n) complexity) as introduced in the BigBird paper, or the original full attention layer (with O(n^2) complexity). Possible values are "original_full" and "block_sparse".

TYPE: `str`, *optional*, defaults to `"block_sparse"` DEFAULT: 'block_sparse'

use_bias

Whether to use bias in the query, key and value projections.

TYPE: `bool`, *optional*, defaults to `True` DEFAULT: True

rescale_embeddings

Whether to rescale the embeddings by hidden_size ** 0.5.

TYPE: `bool`, *optional*, defaults to `False` DEFAULT: False

block_size

Size of each block. Only relevant when attention_type == "block_sparse".

TYPE: `int`, *optional*, defaults to 64 DEFAULT: 64

num_random_blocks

Number of random blocks each query attends to. Only relevant when attention_type == "block_sparse".

TYPE: `int`, *optional*, defaults to 3 DEFAULT: 3

classifier_dropout

The dropout ratio for the classification head.

TYPE: `float`, *optional* DEFAULT: None

Example:

```python
>>> from transformers import BigBirdConfig, BigBirdModel
...
>>> # Initializing a BigBird google/bigbird-roberta-base style configuration
>>> configuration = BigBirdConfig()
...
>>> # Initializing a model (with random weights) from the google/bigbird-roberta-base style configuration
>>> model = BigBirdModel(configuration)
...
>>> # Accessing the model configuration
>>> configuration = model.config
```
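The interplay between `max_position_embeddings`, `block_size`, and `num_random_blocks` can be illustrated with a small standalone sketch (not part of mindnlp; both helper functions below, and the window/global block counts, are illustrative assumptions): block sparse attention partitions the padded sequence into blocks of `block_size` tokens, and each query block attends to a sliding window of neighboring blocks, a few global blocks, and `num_random_blocks` random blocks.

```python
def blocks_for_sequence(seq_len: int, block_size: int = 64) -> int:
    """Number of blocks a sequence occupies. BigBird's block sparse
    attention expects the padded sequence length to be a multiple
    of block_size."""
    if seq_len % block_size != 0:
        raise ValueError(
            f"seq_len {seq_len} must be a multiple of block_size {block_size}"
        )
    return seq_len // block_size


def attended_blocks_per_query(num_random_blocks: int = 3,
                              window_blocks: int = 3,
                              global_blocks: int = 2) -> int:
    """Rough per-query block budget: sliding window + global + random.
    The window and global counts here are illustrative, not the exact
    values used inside the model."""
    return window_blocks + global_blocks + num_random_blocks


print(blocks_for_sequence(4096))    # 64 blocks at the default block_size
print(attended_blocks_per_query())  # 8 blocks per query with the defaults
```

The key point: the per-query cost stays constant as the sequence grows, which is what makes the overall attention cost linear in sequence length.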
Source code in mindnlp/transformers/models/big_bird/configuration_big_bird.py
class BigBirdConfig(PretrainedConfig):
    r"""
    This is the configuration class to store the configuration of a [`BigBirdModel`]. It is used to instantiate a
    BigBird model according to the specified arguments, defining the model architecture. Instantiating a configuration
    with the defaults will yield a similar configuration to that of the BigBird
    [google/bigbird-roberta-base](https://hf-mirror.com/google/bigbird-roberta-base) architecture.

    Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
    documentation from [`PretrainedConfig`] for more information.


    Args:
        vocab_size (`int`, *optional*, defaults to 50358):
            Vocabulary size of the BigBird model. Defines the number of different tokens that can be represented by the
            `input_ids` passed when calling [`BigBirdModel`].
        hidden_size (`int`, *optional*, defaults to 768):
            Dimension of the encoder layers and the pooler layer.
        num_hidden_layers (`int`, *optional*, defaults to 12):
            Number of hidden layers in the Transformer encoder.
        num_attention_heads (`int`, *optional*, defaults to 12):
            Number of attention heads for each attention layer in the Transformer encoder.
        intermediate_size (`int`, *optional*, defaults to 3072):
            Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
        hidden_act (`str` or `function`, *optional*, defaults to `"gelu_new"`):
            The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
            `"relu"`, `"selu"` and `"gelu_new"` are supported.
        hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
            The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
        attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1):
            The dropout ratio for the attention probabilities.
        max_position_embeddings (`int`, *optional*, defaults to 4096):
            The maximum sequence length that this model might ever be used with. Typically set this to something large
            just in case (e.g., 1024 or 2048 or 4096).
        type_vocab_size (`int`, *optional*, defaults to 2):
            The vocabulary size of the `token_type_ids` passed when calling [`BigBirdModel`].
        initializer_range (`float`, *optional*, defaults to 0.02):
            The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
        layer_norm_eps (`float`, *optional*, defaults to 1e-12):
            The epsilon used by the layer normalization layers.
        is_decoder (`bool`, *optional*, defaults to `False`):
            Whether the model is used as a decoder or not. If `False`, the model is used as an encoder.
        use_cache (`bool`, *optional*, defaults to `True`):
            Whether or not the model should return the last key/values attentions (not used by all models). Only
            relevant if `config.is_decoder=True`.
        attention_type (`str`, *optional*, defaults to `"block_sparse"`):
            Whether to use block sparse attention (with O(n) complexity) as introduced in the paper, or the original
            full attention layer (with O(n^2) complexity). Possible values are `"original_full"` and `"block_sparse"`.
        use_bias (`bool`, *optional*, defaults to `True`):
            Whether to use bias in the query, key and value projections.
        rescale_embeddings (`bool`, *optional*, defaults to `False`):
            Whether to rescale the embeddings by `hidden_size ** 0.5`.
        block_size (`int`, *optional*, defaults to 64):
            Size of each block. Only relevant when `attention_type == "block_sparse"`.
        num_random_blocks (`int`, *optional*, defaults to 3):
            Number of random blocks each query attends to. Only relevant when `attention_type ==
            "block_sparse"`.
        classifier_dropout (`float`, *optional*):
            The dropout ratio for the classification head.

    Example:
        ```python
        >>> from transformers import BigBirdConfig, BigBirdModel
        ...
        >>> # Initializing a BigBird google/bigbird-roberta-base style configuration
        >>> configuration = BigBirdConfig()
        ...
        >>> # Initializing a model (with random weights) from the google/bigbird-roberta-base style configuration
        >>> model = BigBirdModel(configuration)
        ...
        >>> # Accessing the model configuration
        >>> configuration = model.config
        ```
    """
    model_type = "big_bird"

    def __init__(
        self,
        vocab_size=50358,
        hidden_size=768,
        num_hidden_layers=12,
        num_attention_heads=12,
        intermediate_size=3072,
        hidden_act="gelu_new",
        hidden_dropout_prob=0.1,
        attention_probs_dropout_prob=0.1,
        max_position_embeddings=4096,
        type_vocab_size=2,
        initializer_range=0.02,
        layer_norm_eps=1e-12,
        use_cache=True,
        pad_token_id=0,
        bos_token_id=1,
        eos_token_id=2,
        sep_token_id=66,
        attention_type="block_sparse",
        use_bias=True,
        rescale_embeddings=False,
        block_size=64,
        num_random_blocks=3,
        classifier_dropout=None,
        **kwargs,
    ):
        """
        Initializes a new instance of the BigBirdConfig class.

        Args:
            vocab_size (int, optional): The size of the vocabulary. Defaults to 50358.
            hidden_size (int, optional): The size of the hidden layer. Defaults to 768.
            num_hidden_layers (int, optional): The number of hidden layers. Defaults to 12.
            num_attention_heads (int, optional): The number of attention heads. Defaults to 12.
            intermediate_size (int, optional): The size of the intermediate layer in the transformer. Defaults to 3072.
            hidden_act (str, optional): The activation function for the hidden layer. Defaults to 'gelu_new'.
            hidden_dropout_prob (float, optional): The dropout probability for the hidden layer. Defaults to 0.1.
            attention_probs_dropout_prob (float, optional): The dropout probability for the attention probabilities. Defaults to 0.1.
            max_position_embeddings (int, optional): The maximum number of positions for the embeddings. Defaults to 4096.
            type_vocab_size (int, optional): The size of the type vocabulary. Defaults to 2.
            initializer_range (float, optional): The range for the initializer. Defaults to 0.02.
            layer_norm_eps (float, optional): The epsilon value for layer normalization. Defaults to 1e-12.
            use_cache (bool, optional): Whether to use cache in the transformer layers. Defaults to True.
            pad_token_id (int, optional): The token id for padding. Defaults to 0.
            bos_token_id (int, optional): The token id for the beginning of sentence. Defaults to 1.
            eos_token_id (int, optional): The token id for the end of sentence. Defaults to 2.
            sep_token_id (int, optional): The token id for the separator. Defaults to 66.
            attention_type (str, optional): The type of attention mechanism. Defaults to 'block_sparse'.
            use_bias (bool, optional): Whether to use bias in the transformer layers. Defaults to True.
            rescale_embeddings (bool, optional): Whether to rescale the embeddings. Defaults to False.
            block_size (int, optional): The size of each block in block sparse attention. Defaults to 64.
            num_random_blocks (int, optional): The number of random blocks in block sparse attention. Defaults to 3.
            classifier_dropout (float, optional): The dropout probability for the classifier layer. Defaults to None.

        Returns:
            None

        Raises:
            None
        """
        super().__init__(
            pad_token_id=pad_token_id,
            bos_token_id=bos_token_id,
            eos_token_id=eos_token_id,
            sep_token_id=sep_token_id,
            **kwargs,
        )

        self.vocab_size = vocab_size
        self.max_position_embeddings = max_position_embeddings
        self.hidden_size = hidden_size
        self.num_hidden_layers = num_hidden_layers
        self.num_attention_heads = num_attention_heads
        self.intermediate_size = intermediate_size
        self.hidden_act = hidden_act
        self.hidden_dropout_prob = hidden_dropout_prob
        self.attention_probs_dropout_prob = attention_probs_dropout_prob
        self.initializer_range = initializer_range
        self.type_vocab_size = type_vocab_size
        self.layer_norm_eps = layer_norm_eps
        self.use_cache = use_cache

        self.rescale_embeddings = rescale_embeddings
        self.attention_type = attention_type
        self.use_bias = use_bias
        self.block_size = block_size
        self.num_random_blocks = num_random_blocks
        self.classifier_dropout = classifier_dropout

mindnlp.transformers.models.big_bird.configuration_big_bird.BigBirdConfig.__init__(vocab_size=50358, hidden_size=768, num_hidden_layers=12, num_attention_heads=12, intermediate_size=3072, hidden_act='gelu_new', hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1, max_position_embeddings=4096, type_vocab_size=2, initializer_range=0.02, layer_norm_eps=1e-12, use_cache=True, pad_token_id=0, bos_token_id=1, eos_token_id=2, sep_token_id=66, attention_type='block_sparse', use_bias=True, rescale_embeddings=False, block_size=64, num_random_blocks=3, classifier_dropout=None, **kwargs)

Initializes a new instance of the BigBirdConfig class.

PARAMETER DESCRIPTION
vocab_size

The size of the vocabulary. Defaults to 50358.

TYPE: int DEFAULT: 50358

hidden_size

The size of the hidden layer. Defaults to 768.

TYPE: int DEFAULT: 768

num_hidden_layers

The number of hidden layers. Defaults to 12.

TYPE: int DEFAULT: 12

num_attention_heads

The number of attention heads. Defaults to 12.

TYPE: int DEFAULT: 12

intermediate_size

The size of the intermediate layer in the transformer. Defaults to 3072.

TYPE: int DEFAULT: 3072

hidden_act

The activation function for the hidden layer. Defaults to 'gelu_new'.

TYPE: str DEFAULT: 'gelu_new'

hidden_dropout_prob

The dropout probability for the hidden layer. Defaults to 0.1.

TYPE: float DEFAULT: 0.1

attention_probs_dropout_prob

The dropout probability for the attention probabilities. Defaults to 0.1.

TYPE: float DEFAULT: 0.1

max_position_embeddings

The maximum number of positions for the embeddings. Defaults to 4096.

TYPE: int DEFAULT: 4096

type_vocab_size

The size of the type vocabulary. Defaults to 2.

TYPE: int DEFAULT: 2

initializer_range

The range for the initializer. Defaults to 0.02.

TYPE: float DEFAULT: 0.02

layer_norm_eps

The epsilon value for layer normalization. Defaults to 1e-12.

TYPE: float DEFAULT: 1e-12

use_cache

Whether to use cache in the transformer layers. Defaults to True.

TYPE: bool DEFAULT: True

pad_token_id

The token id for padding. Defaults to 0.

TYPE: int DEFAULT: 0

bos_token_id

The token id for the beginning of sentence. Defaults to 1.

TYPE: int DEFAULT: 1

eos_token_id

The token id for the end of sentence. Defaults to 2.

TYPE: int DEFAULT: 2

sep_token_id

The token id for the separator. Defaults to 66.

TYPE: int DEFAULT: 66

attention_type

The type of attention mechanism. Defaults to 'block_sparse'.

TYPE: str DEFAULT: 'block_sparse'

use_bias

Whether to use bias in the transformer layers. Defaults to True.

TYPE: bool DEFAULT: True

rescale_embeddings

Whether to rescale the embeddings. Defaults to False.

TYPE: bool DEFAULT: False

block_size

The size of each block in block sparse attention. Defaults to 64.

TYPE: int DEFAULT: 64

num_random_blocks

The number of random blocks in block sparse attention. Defaults to 3.

TYPE: int DEFAULT: 3

classifier_dropout

The dropout probability for the classifier layer. Defaults to None.

TYPE: float DEFAULT: None

RETURNS DESCRIPTION

None
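The constructor routes the special token ids (`pad_token_id`, `bos_token_id`, `eos_token_id`, `sep_token_id`) to the base class while storing the model hyperparameters on the subclass. A minimal standalone sketch of that pattern (using a stand-in `BaseConfig`, not the real `PretrainedConfig`):

```python
class BaseConfig:
    """Stand-in for PretrainedConfig: stores special token ids and
    turns any extra keyword arguments into attributes."""
    def __init__(self, pad_token_id=None, bos_token_id=None,
                 eos_token_id=None, sep_token_id=None, **kwargs):
        self.pad_token_id = pad_token_id
        self.bos_token_id = bos_token_id
        self.eos_token_id = eos_token_id
        self.sep_token_id = sep_token_id
        for key, value in kwargs.items():  # unknown kwargs become attributes
            setattr(self, key, value)


class ToyBigBirdConfig(BaseConfig):
    def __init__(self, vocab_size=50358, block_size=64,
                 pad_token_id=0, bos_token_id=1, eos_token_id=2,
                 sep_token_id=66, **kwargs):
        # Token ids are forwarded to the base class; model hyperparameters
        # are stored on the subclass, mirroring BigBirdConfig.__init__.
        super().__init__(pad_token_id=pad_token_id, bos_token_id=bos_token_id,
                         eos_token_id=eos_token_id, sep_token_id=sep_token_id,
                         **kwargs)
        self.vocab_size = vocab_size
        self.block_size = block_size


config = ToyBigBirdConfig(block_size=128, output_attentions=True)
print(config.sep_token_id, config.block_size, config.output_attentions)
```

This is why arbitrary `**kwargs` such as `output_attentions=True` can be passed straight to `BigBirdConfig` and still end up on the resulting configuration object.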

Source code in mindnlp/transformers/models/big_bird/configuration_big_bird.py
def __init__(
    self,
    vocab_size=50358,
    hidden_size=768,
    num_hidden_layers=12,
    num_attention_heads=12,
    intermediate_size=3072,
    hidden_act="gelu_new",
    hidden_dropout_prob=0.1,
    attention_probs_dropout_prob=0.1,
    max_position_embeddings=4096,
    type_vocab_size=2,
    initializer_range=0.02,
    layer_norm_eps=1e-12,
    use_cache=True,
    pad_token_id=0,
    bos_token_id=1,
    eos_token_id=2,
    sep_token_id=66,
    attention_type="block_sparse",
    use_bias=True,
    rescale_embeddings=False,
    block_size=64,
    num_random_blocks=3,
    classifier_dropout=None,
    **kwargs,
):
    """
    Initializes a new instance of the BigBirdConfig class.

    Args:
        vocab_size (int, optional): The size of the vocabulary. Defaults to 50358.
        hidden_size (int, optional): The size of the hidden layer. Defaults to 768.
        num_hidden_layers (int, optional): The number of hidden layers. Defaults to 12.
        num_attention_heads (int, optional): The number of attention heads. Defaults to 12.
        intermediate_size (int, optional): The size of the intermediate layer in the transformer. Defaults to 3072.
        hidden_act (str, optional): The activation function for the hidden layer. Defaults to 'gelu_new'.
        hidden_dropout_prob (float, optional): The dropout probability for the hidden layer. Defaults to 0.1.
        attention_probs_dropout_prob (float, optional): The dropout probability for the attention probabilities. Defaults to 0.1.
        max_position_embeddings (int, optional): The maximum number of positions for the embeddings. Defaults to 4096.
        type_vocab_size (int, optional): The size of the type vocabulary. Defaults to 2.
        initializer_range (float, optional): The range for the initializer. Defaults to 0.02.
        layer_norm_eps (float, optional): The epsilon value for layer normalization. Defaults to 1e-12.
        use_cache (bool, optional): Whether to use cache in the transformer layers. Defaults to True.
        pad_token_id (int, optional): The token id for padding. Defaults to 0.
        bos_token_id (int, optional): The token id for the beginning of sentence. Defaults to 1.
        eos_token_id (int, optional): The token id for the end of sentence. Defaults to 2.
        sep_token_id (int, optional): The token id for the separator. Defaults to 66.
        attention_type (str, optional): The type of attention mechanism. Defaults to 'block_sparse'.
        use_bias (bool, optional): Whether to use bias in the transformer layers. Defaults to True.
        rescale_embeddings (bool, optional): Whether to rescale the embeddings. Defaults to False.
        block_size (int, optional): The size of each block in block sparse attention. Defaults to 64.
        num_random_blocks (int, optional): The number of random blocks in block sparse attention. Defaults to 3.
        classifier_dropout (float, optional): The dropout probability for the classifier layer. Defaults to None.

    Returns:
        None

    Raises:
        None
    """
    super().__init__(
        pad_token_id=pad_token_id,
        bos_token_id=bos_token_id,
        eos_token_id=eos_token_id,
        sep_token_id=sep_token_id,
        **kwargs,
    )

    self.vocab_size = vocab_size
    self.max_position_embeddings = max_position_embeddings
    self.hidden_size = hidden_size
    self.num_hidden_layers = num_hidden_layers
    self.num_attention_heads = num_attention_heads
    self.intermediate_size = intermediate_size
    self.hidden_act = hidden_act
    self.hidden_dropout_prob = hidden_dropout_prob
    self.attention_probs_dropout_prob = attention_probs_dropout_prob
    self.initializer_range = initializer_range
    self.type_vocab_size = type_vocab_size
    self.layer_norm_eps = layer_norm_eps
    self.use_cache = use_cache

    self.rescale_embeddings = rescale_embeddings
    self.attention_type = attention_type
    self.use_bias = use_bias
    self.block_size = block_size
    self.num_random_blocks = num_random_blocks
    self.classifier_dropout = classifier_dropout

mindnlp.transformers.models.big_bird.modeling_big_bird.BIG_BIRD_PRETRAINED_MODEL_ARCHIVE_LIST = ['google/bigbird-roberta-base', 'google/bigbird-roberta-large', 'google/bigbird-base-trivia-itc'] module-attribute

mindnlp.transformers.models.big_bird.modeling_big_bird.BigBirdForCausalLM

Bases: BigBirdPreTrainedModel

This class represents a BigBird model for Causal Language Modeling (LM). It is designed for generating text sequences autoregressively, predicting the next token in a sequence given the previous tokens.

The class includes methods for initializing the model, getting and setting output embeddings, forwarding the model with various input parameters, preparing inputs for text generation, and reordering cache during decoding.

ATTRIBUTE DESCRIPTION
bert

BigBirdModel instance representing the core BigBird model.

cls

BigBirdOnlyMLMHead instance for predicting masked tokens in the input sequence.

METHOD DESCRIPTION
__init__

Initializes the BigBirdForCausalLM model with the provided configuration.

get_output_embeddings

Retrieves the output embeddings from the model.

set_output_embeddings

Sets new output embeddings for the model.

prepare_inputs_for_generation

Prepares inputs for text generation, handling past key values and attention mask.

_reorder_cache

Reorders the cache during decoding based on beam index for faster processing.

Note

This class inherits from BigBirdPreTrainedModel to leverage pre-trained weights and configurations.
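The next-token objective in `forward` shifts predictions and labels by one position before computing cross-entropy: position t predicts token t+1. A library-free sketch of that alignment, with a plain list standing in for the tensors:

```python
def shift_for_next_token(token_ids):
    """Align inputs and targets for causal LM training. Position t
    predicts token t+1, so the prediction at the last position and
    the label at the first position are dropped."""
    inputs = token_ids[:-1]   # corresponds to prediction_scores[:, :-1, :]
    targets = token_ids[1:]   # corresponds to labels[:, 1:]
    return inputs, targets


inputs, targets = shift_for_next_token([5, 17, 42, 8])
print(inputs)   # [5, 17, 42]
print(targets)  # [17, 42, 8]
```

After this shift the score at each remaining position is compared against the token that actually follows it, which is exactly what the `F.cross_entropy` call in `forward` computes.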

Source code in mindnlp/transformers/models/big_bird/modeling_big_bird.py
class BigBirdForCausalLM(BigBirdPreTrainedModel):

    """
    This class represents a BigBird model for Causal Language Modeling (LM).
    It is designed for generating text sequences autoregressively,
    predicting the next token in a sequence given the previous tokens.

    The class includes methods for initializing the model, getting and setting output embeddings,
    forwarding the model with various input parameters, preparing inputs for text generation, and reordering
    cache during decoding.

    Attributes:
        bert: BigBirdModel instance representing the core BigBird model.
        cls: BigBirdOnlyMLMHead instance for predicting masked tokens in the input sequence.

    Methods:
        __init__(self, config): Initializes the BigBirdForCausalLM model with the provided configuration.
        get_output_embeddings(self): Retrieves the output embeddings from the model.
        set_output_embeddings(self, new_embeddings): Sets new output embeddings for the model.
        forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds,
            encoder_hidden_states, encoder_attention_mask, past_key_values, labels, use_cache, output_attentions,
            output_hidden_states, return_dict): Constructs the model for LM generation, taking various input parameters.
        prepare_inputs_for_generation(self, input_ids, past_key_values, attention_mask): Prepares inputs for text generation, handling past key values and attention mask.
        _reorder_cache(self, past_key_values, beam_idx): Reorders the cache during decoding based on beam index for faster processing.

    Note:
        This class inherits from BigBirdPreTrainedModel to leverage pre-trained weights and configurations.
    """
    _tied_weights_keys = ["cls.predictions.decoder.weight", "cls.predictions.decoder.bias"]

    def __init__(self, config):
        """
        Initializes an instance of the BigBirdForCausalLM class.

        Args:
            self: The instance of the class.
            config: An instance of the BigBirdConfig class containing the configuration settings for the model.
                It must have the following attributes:

                - is_decoder (bool): Indicates whether the model is used as a decoder. If False, a warning message is logged.

        Returns:
            None.

        Raises:
            None.

        This method initializes the BigBirdForCausalLM instance by calling the superclass's __init__ method with the provided config.
        If the is_decoder attribute in the config is False, a warning message is logged to remind the user to set it to True
        if they want to use BigBirdForCausalLM as a standalone model.
        The method then initializes the bert attribute with an instance of the BigBirdModel class, using the provided config.
        Finally, the cls attribute is initialized with an instance of the BigBirdOnlyMLMHead class, using the provided config.
        """
        super().__init__(config)

        if not config.is_decoder:
            logger.warning("If you want to use `BigBirdForCausalLM` as a standalone, add `is_decoder=True.`")

        self.bert = BigBirdModel(config)
        self.cls = BigBirdOnlyMLMHead(config)

        # Initialize weights and apply final processing
        self.post_init()

    def get_output_embeddings(self):
        """
        Returns the output embeddings for the BigBirdForCausalLM model.

        Args:
            self (BigBirdForCausalLM): The instance of the BigBirdForCausalLM class.

        Returns:
            None.

        Raises:
            None.
        """
        return self.cls.predictions.decoder

    def set_output_embeddings(self, new_embeddings):
        """
        Sets the output embeddings of the BigBirdForCausalLM model.

        Args:
            self (BigBirdForCausalLM): The instance of the BigBirdForCausalLM class.
            new_embeddings: The new embeddings to be set for the output layer. It should be a tensor of shape
                (vocab_size, hidden_size), where vocab_size is the size of the output vocabulary
                and hidden_size is the size of the hidden layers in the model.

        Returns:
            None: This method modifies the output embeddings of the BigBirdForCausalLM model in place.

        Raises:
            None.
        """
        self.cls.predictions.decoder = new_embeddings

    def forward(
        self,
        input_ids: mindspore.Tensor = None,
        attention_mask: Optional[mindspore.Tensor] = None,
        token_type_ids: Optional[mindspore.Tensor] = None,
        position_ids: Optional[mindspore.Tensor] = None,
        head_mask: Optional[mindspore.Tensor] = None,
        inputs_embeds: Optional[mindspore.Tensor] = None,
        encoder_hidden_states: Optional[mindspore.Tensor] = None,
        encoder_attention_mask: Optional[mindspore.Tensor] = None,
        past_key_values: Optional[Tuple[Tuple[mindspore.Tensor]]] = None,
        labels: Optional[mindspore.Tensor] = None,
        use_cache: Optional[bool] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[CausalLMOutputWithCrossAttentions, Tuple[mindspore.Tensor]]:
        r"""
        Args:
            encoder_hidden_states  (`mindspore.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
                Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
                the model is configured as a decoder.
            encoder_attention_mask (`mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
                Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
                the cross-attention if the model is configured as a decoder. Mask values selected in `[0, 1]`:

                - 1 for tokens that are **not masked**,
                - 0 for tokens that are **masked**.
            past_key_values (`tuple(tuple(mindspore.Tensor))` of length `config.n_layers` with each tuple having 4 tensors of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`):
                Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
                If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that
                don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all
                `decoder_input_ids` of shape `(batch_size, sequence_length)`.
            labels (`mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
                Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in
                `[-100, 0, ..., config.vocab_size]` (see `input_ids` docstring). Tokens with indices set to `-100` are
                ignored (masked); the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
            use_cache (`bool`, *optional*):
                If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see
                `past_key_values`).
        """
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        outputs = self.bert(
            input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            position_ids=position_ids,
            head_mask=head_mask,
            inputs_embeds=inputs_embeds,
            encoder_hidden_states=encoder_hidden_states,
            encoder_attention_mask=encoder_attention_mask,
            past_key_values=past_key_values,
            use_cache=use_cache,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )

        sequence_output = outputs[0]
        prediction_scores = self.cls(sequence_output)

        lm_loss = None
        if labels is not None:
            # we are doing next-token prediction; shift prediction scores and input ids by one
            shifted_prediction_scores = prediction_scores[:, :-1, :]
            labels = labels[:, 1:]
            lm_loss = F.cross_entropy(shifted_prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))

        if not return_dict:
            output = (prediction_scores,) + outputs[2:]
            return ((lm_loss,) + output) if lm_loss is not None else output

        return CausalLMOutputWithCrossAttentions(
            loss=lm_loss,
            logits=prediction_scores,
            past_key_values=outputs.past_key_values,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
            cross_attentions=outputs.cross_attentions,
        )

    def prepare_inputs_for_generation(self, input_ids, past_key_values=None, attention_mask=None):
        """
        This method prepares inputs for generation in the BigBirdForCausalLM class.

        Args:
            self: The instance of the class.
            input_ids (mindspore.Tensor): The input tensor containing the token ids.
                Shape should be (batch_size, sequence_length).
            past_key_values (tuple, optional): The tuple of past key values for attention mechanism.
                Default is None.
            attention_mask (mindspore.Tensor, optional): The attention mask tensor.
                If not provided, it is initialized with ones of the same shape as input_ids.

        Returns:
            dict:
                A dictionary containing the prepared inputs for generation with the following keys:

                - 'input_ids' (mindspore.Tensor): The input tensor for generation with potentially removed prefix.
                - 'attention_mask' (mindspore.Tensor): The attention mask tensor.
                - 'past_key_values' (tuple): The past key values for attention mechanism.

        Raises:
            ValueError: If the input_ids shape does not match the expected shape.
            IndexError: If the past_key_values tuple does not have the expected structure.
        """
        input_shape = input_ids.shape

        # if model is used as a decoder in encoder-decoder model, the decoder attention mask is created on the fly
        if attention_mask is None:
            attention_mask = input_ids.new_ones(input_shape)

        # cut decoder_input_ids if past_key_values is used
        if past_key_values is not None:
            past_length = past_key_values[0][0].shape[2]

            # Some generation methods already pass only the last input ID
            if input_ids.shape[1] > past_length:
                remove_prefix_length = past_length
            else:
                # Default to old behavior: keep only final ID
                remove_prefix_length = input_ids.shape[1] - 1

            input_ids = input_ids[:, remove_prefix_length:]

        return {"input_ids": input_ids, "attention_mask": attention_mask, "past_key_values": past_key_values}

    def _reorder_cache(self, past_key_values, beam_idx):
        """
        Reorders the cache for the BigBirdForCausalLM model based on the provided beam index.

        Args:
            self (BigBirdForCausalLM): The instance of the BigBirdForCausalLM class.
            past_key_values (tuple): A tuple containing past key values for the model.
            beam_idx (Tensor): A tensor representing the beam index to reorder the cache.

        Returns:
            tuple: The reordered past key values, with the key and value states of each layer re-indexed by beam_idx.

        Raises:
            IndexError: If the beam index is out of bounds or invalid.
            ValueError: If the past_key_values are not in the expected format.
        """
        reordered_past = ()
        for layer_past in past_key_values:
            reordered_past += (
                tuple(past_state.index_select(0, beam_idx) for past_state in layer_past[:2])
                + layer_past[2:],
            )
        return reordered_past
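The reordering logic above can be sketched with plain Python lists standing in for the cached mindspore tensors (an illustrative helper, not part of the library):

```python
def reorder_cache(past_key_values, beam_idx):
    """Re-index the first two cache entries (key/value states) of every
    layer along the batch/beam dimension; keep any extra entries as-is."""
    reordered = ()
    for layer_past in past_key_values:
        reordered += (
            tuple([state[i] for i in beam_idx] for state in layer_past[:2])
            + layer_past[2:],
        )
    return reordered

# One layer: key states, value states, plus an untouched extra entry.
layer = (["k0", "k1", "k2"], ["v0", "v1", "v2"], "extra")
out = reorder_cache((layer,), beam_idx=[2, 0, 1])
# out[0][0] is ["k2", "k0", "k1"]; out[0][2] is still "extra"
```

In the real method, `index_select(0, beam_idx)` performs the same per-batch re-indexing directly on the cached tensors.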

mindnlp.transformers.models.big_bird.modeling_big_bird.BigBirdForCausalLM.__init__(config)

Initializes an instance of the BigBirdForCausalLM class.

PARAMETER DESCRIPTION
self

The instance of the class.

config

An instance of the BigBirdConfig class containing the configuration settings for the model. It must have the following attributes:

  • is_decoder (bool): Indicates whether the model is used as a decoder. If False, a warning message is logged.

RETURNS DESCRIPTION

None.

This method initializes the BigBirdForCausalLM instance by calling the superclass's init method with the provided config. If the is_decoder attribute in the config is False, a warning message is logged to remind the user to set it to True if they want to use BigBirdForCausalLM as a standalone model. The method then initializes the bert attribute with an instance of the BigBirdModel class, using the provided config. Finally, the cls attribute is initialized with an instance of the BigBirdOnlyMLMHead class, using the provided config.

Source code in mindnlp/transformers/models/big_bird/modeling_big_bird.py, lines 3552-3584
def __init__(self, config):
    """
    Initializes an instance of the BigBirdForCausalLM class.

    Args:
        self: The instance of the class.
        config: An instance of the BigBirdConfig class containing the configuration settings for the model.
            It must have the following attributes:

            - is_decoder (bool): Indicates whether the model is used as a decoder. If False, a warning message is logged.

    Returns:
        None.

    Raises:
        None.

    This method initializes the BigBirdForCausalLM instance by calling the superclass's __init__ method with the provided config.
    If the is_decoder attribute in the config is False, a warning message is logged to remind the user to set it to True
    if they want to use BigBirdForCausalLM as a standalone model.
    The method then initializes the bert attribute with an instance of the BigBirdModel class, using the provided config.
    Finally, the cls attribute is initialized with an instance of the BigBirdOnlyMLMHead class, using the provided config.
    """
    super().__init__(config)

    if not config.is_decoder:
        logger.warning("If you want to use `BigBirdForCausalLM` as a standalone, add `is_decoder=True`.")

    self.bert = BigBirdModel(config)
    self.cls = BigBirdOnlyMLMHead(config)

    # Initialize weights and apply final processing
    self.post_init()

mindnlp.transformers.models.big_bird.modeling_big_bird.BigBirdForCausalLM.forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, encoder_hidden_states=None, encoder_attention_mask=None, past_key_values=None, labels=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None)

PARAMETER DESCRIPTION
encoder_hidden_states

Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder.

TYPE: (`mindspore.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional* DEFAULT: None

encoder_attention_mask

Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:

  • 1 for tokens that are not masked,
  • 0 for tokens that are masked.

TYPE: `mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional* DEFAULT: None

past_key_values

Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don't have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).

TYPE: `tuple(tuple(mindspore.Tensor))` of length `config.n_layers` with each tuple having 4 tensors of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)` DEFAULT: None

labels

Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].

TYPE: `mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional* DEFAULT: None

use_cache

If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).

TYPE: `bool`, *optional* DEFAULT: None

Source code in mindnlp/transformers/models/big_bird/modeling_big_bird.py, lines 3619-3699
def forward(
    self,
    input_ids: mindspore.Tensor = None,
    attention_mask: Optional[mindspore.Tensor] = None,
    token_type_ids: Optional[mindspore.Tensor] = None,
    position_ids: Optional[mindspore.Tensor] = None,
    head_mask: Optional[mindspore.Tensor] = None,
    inputs_embeds: Optional[mindspore.Tensor] = None,
    encoder_hidden_states: Optional[mindspore.Tensor] = None,
    encoder_attention_mask: Optional[mindspore.Tensor] = None,
    past_key_values: Optional[Tuple[Tuple[mindspore.Tensor]]] = None,
    labels: Optional[mindspore.Tensor] = None,
    use_cache: Optional[bool] = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
) -> Union[CausalLMOutputWithCrossAttentions, Tuple[mindspore.Tensor]]:
    r"""
    Args:
        encoder_hidden_states  (`mindspore.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
            Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
            the model is configured as a decoder.
        encoder_attention_mask (`mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
            Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
            the cross-attention if the model is configured as a decoder. Mask values selected in `[0, 1]`:

            - 1 for tokens that are **not masked**,
            - 0 for tokens that are **masked**.
        past_key_values (`tuple(tuple(mindspore.Tensor))` of length `config.n_layers` with each tuple having 4 tensors of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`):
            Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
            If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that
            don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all
            `decoder_input_ids` of shape `(batch_size, sequence_length)`.
        labels (`mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
            Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in
            `[-100, 0, ..., config.vocab_size]` (see `input_ids` docstring). Tokens with indices set to `-100` are
            ignored (masked); the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
        use_cache (`bool`, *optional*):
            If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see
            `past_key_values`).
    """
    return_dict = return_dict if return_dict is not None else self.config.use_return_dict

    outputs = self.bert(
        input_ids,
        attention_mask=attention_mask,
        token_type_ids=token_type_ids,
        position_ids=position_ids,
        head_mask=head_mask,
        inputs_embeds=inputs_embeds,
        encoder_hidden_states=encoder_hidden_states,
        encoder_attention_mask=encoder_attention_mask,
        past_key_values=past_key_values,
        use_cache=use_cache,
        output_attentions=output_attentions,
        output_hidden_states=output_hidden_states,
        return_dict=return_dict,
    )

    sequence_output = outputs[0]
    prediction_scores = self.cls(sequence_output)

    lm_loss = None
    if labels is not None:
        # we are doing next-token prediction; shift prediction scores and input ids by one
        shifted_prediction_scores = prediction_scores[:, :-1, :]
        labels = labels[:, 1:]
        lm_loss = F.cross_entropy(shifted_prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))

    if not return_dict:
        output = (prediction_scores,) + outputs[2:]
        return ((lm_loss,) + output) if lm_loss is not None else output

    return CausalLMOutputWithCrossAttentions(
        loss=lm_loss,
        logits=prediction_scores,
        past_key_values=outputs.past_key_values,
        hidden_states=outputs.hidden_states,
        attentions=outputs.attentions,
        cross_attentions=outputs.cross_attentions,
    )
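The one-position shift applied before the loss (scores lose their last position, labels their first token) can be sketched with plain Python lists in place of tensors (illustrative helper names, not library code):

```python
def shift_for_next_token(prediction_scores, labels):
    # Position t predicts token t+1, so the final score and the first
    # label have no counterpart and are dropped before cross-entropy.
    shifted_scores = [seq[:-1] for seq in prediction_scores]
    shifted_labels = [seq[1:] for seq in labels]
    return shifted_scores, shifted_labels

scores = [["s0", "s1", "s2", "s3"]]  # per-position score placeholders
labels = [[10, 11, 12, 13]]          # token ids
shifted_scores, shifted_labels = shift_for_next_token(scores, labels)
# "s0" now lines up with label 11, "s1" with 12, "s2" with 13.
```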

mindnlp.transformers.models.big_bird.modeling_big_bird.BigBirdForCausalLM.get_output_embeddings()

Returns the output embeddings for the BigBirdForCausalLM model.

PARAMETER DESCRIPTION
self

The instance of the BigBirdForCausalLM class.

TYPE: BigBirdForCausalLM

RETURNS DESCRIPTION

The output embedding layer (cls.predictions.decoder) used to produce vocabulary logits.

Source code in mindnlp/transformers/models/big_bird/modeling_big_bird.py, lines 3586-3599
def get_output_embeddings(self):
    """
    Returns the output embeddings for the BigBirdForCausalLM model.

    Args:
        self (BigBirdForCausalLM): The instance of the BigBirdForCausalLM class.

    Returns:
        The output embedding layer (self.cls.predictions.decoder).

    Raises:
        None.
    """
    return self.cls.predictions.decoder

mindnlp.transformers.models.big_bird.modeling_big_bird.BigBirdForCausalLM.prepare_inputs_for_generation(input_ids, past_key_values=None, attention_mask=None)

This method prepares inputs for generation in the BigBirdForCausalLM class.

PARAMETER DESCRIPTION
self

The instance of the class.

input_ids

The input tensor containing the token ids. Shape should be (batch_size, sequence_length).

TYPE: Tensor

past_key_values

The tuple of past key values for attention mechanism. Default is None.

TYPE: tuple DEFAULT: None

attention_mask

The attention mask tensor. If not provided, it is initialized with ones of the same shape as input_ids.

TYPE: Tensor DEFAULT: None

RETURNS DESCRIPTION
dict

A dictionary containing the prepared inputs for generation with the following keys:

  • 'input_ids' (mindspore.Tensor): The input tensor for generation with potentially removed prefix.
  • 'attention_mask' (mindspore.Tensor): The attention mask tensor.
  • 'past_key_values' (tuple): The past key values for attention mechanism.
RAISES DESCRIPTION
ValueError

If the input_ids shape does not match the expected shape.

IndexError

If the past_key_values tuple does not have the expected structure.

Source code in mindnlp/transformers/models/big_bird/modeling_big_bird.py, lines 3701-3745
def prepare_inputs_for_generation(self, input_ids, past_key_values=None, attention_mask=None):
    """
    This method prepares inputs for generation in the BigBirdForCausalLM class.

    Args:
        self: The instance of the class.
        input_ids (mindspore.Tensor): The input tensor containing the token ids.
            Shape should be (batch_size, sequence_length).
        past_key_values (tuple, optional): The tuple of past key values for attention mechanism.
            Default is None.
        attention_mask (mindspore.Tensor, optional): The attention mask tensor.
            If not provided, it is initialized with ones of the same shape as input_ids.

    Returns:
        dict:
            A dictionary containing the prepared inputs for generation with the following keys:

            - 'input_ids' (mindspore.Tensor): The input tensor for generation with potentially removed prefix.
            - 'attention_mask' (mindspore.Tensor): The attention mask tensor.
            - 'past_key_values' (tuple): The past key values for attention mechanism.

    Raises:
        ValueError: If the input_ids shape does not match the expected shape.
        IndexError: If the past_key_values tuple does not have the expected structure.
    """
    input_shape = input_ids.shape

    # if model is used as a decoder in encoder-decoder model, the decoder attention mask is created on the fly
    if attention_mask is None:
        attention_mask = input_ids.new_ones(input_shape)

    # cut decoder_input_ids if past_key_values is used
    if past_key_values is not None:
        past_length = past_key_values[0][0].shape[2]

        # Some generation methods already pass only the last input ID
        if input_ids.shape[1] > past_length:
            remove_prefix_length = past_length
        else:
            # Default to old behavior: keep only final ID
            remove_prefix_length = input_ids.shape[1] - 1

        input_ids = input_ids[:, remove_prefix_length:]

    return {"input_ids": input_ids, "attention_mask": attention_mask, "past_key_values": past_key_values}
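The prefix-trimming rule above reduces to a few lines; here it is sketched over plain Python lists (a hypothetical helper, with `past_length` playing the role of `past_key_values[0][0].shape[2]`):

```python
def trim_input_ids(input_ids, past_length):
    # If more tokens were passed than are cached, drop the cached prefix;
    # otherwise fall back to keeping only the final token.
    seq_len = len(input_ids[0])
    if seq_len > past_length:
        remove_prefix_length = past_length
    else:
        remove_prefix_length = seq_len - 1
    return [seq[remove_prefix_length:] for seq in input_ids]

trimmed = trim_input_ids([[11, 12, 13, 14, 15]], past_length=3)
# trimmed == [[14, 15]]: only the two uncached tokens are re-fed.
```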

mindnlp.transformers.models.big_bird.modeling_big_bird.BigBirdForCausalLM.set_output_embeddings(new_embeddings)

Sets the output embeddings of the BigBirdForCausalLM model.

PARAMETER DESCRIPTION
self

The instance of the BigBirdForCausalLM class.

TYPE: BigBirdForCausalLM

new_embeddings

The new embeddings to be set for the output layer. It should be a tensor of shape (vocab_size, hidden_size), where vocab_size is the size of the output vocabulary and hidden_size is the size of the hidden layers in the model.

RETURNS DESCRIPTION
None

This method modifies the output embeddings of the BigBirdForCausalLM model in place.

Source code in mindnlp/transformers/models/big_bird/modeling_big_bird.py
3601
3602
3603
3604
3605
3606
3607
3608
3609
3610
3611
3612
3613
3614
3615
3616
3617
def set_output_embeddings(self, new_embeddings):
    """
    Sets the output embeddings of the BigBirdForCausalLM model.

    Args:
        self (BigBirdForCausalLM): The instance of the BigBirdForCausalLM class.
        new_embeddings: The new embeddings to be set for the output layer. It should be a tensor of shape
            (vocab_size, hidden_size), where vocab_size is the size of the output vocabulary
            and hidden_size is the size of the hidden layers in the model.

    Returns:
        None: This method modifies the output embeddings of the BigBirdForCausalLM model in place.

    Raises:
        None.
    """
    self.cls.predictions.decoder = new_embeddings

mindnlp.transformers.models.big_bird.modeling_big_bird.BigBirdForMaskedLM

Bases: BigBirdPreTrainedModel

BigBirdForMaskedLM includes methods to create a BigBird model for masked language modeling tasks.

This class inherits from BigBirdPreTrainedModel, and provides functionality to initialize the model, get and set the output embeddings, forward the model for masked language modeling, and prepare inputs for generation.

Example
>>> import torch
>>> from transformers import AutoTokenizer, BigBirdForMaskedLM
>>> from datasets import load_dataset
...
>>> tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")
>>> model = BigBirdForMaskedLM.from_pretrained("google/bigbird-roberta-base")
>>> squad_ds = load_dataset("squad_v2", split="train")  # doctest: +IGNORE_RESULT
...
>>> # select random long article
>>> LONG_ARTICLE_TARGET = squad_ds[81514]["context"]
>>> # select random sentence
>>> LONG_ARTICLE_TARGET[332:398]
'the highest values are very close to the theoretical maximum value'
>>> # add mask_token
>>> LONG_ARTICLE_TO_MASK = LONG_ARTICLE_TARGET.replace("maximum", "[MASK]")
>>> inputs = tokenizer(LONG_ARTICLE_TO_MASK, return_tensors="pt")
>>> # long article input
>>> list(inputs["input_ids"].shape)
[1, 919]
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> # retrieve index of [MASK]
>>> mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
>>> predicted_token_id = logits[0, mask_token_index].argmax(dim=-1)
>>> tokenizer.decode(predicted_token_id)
'maximum'
Source code in mindnlp/transformers/models/big_bird/modeling_big_bird.py, lines 3286-3519
class BigBirdForMaskedLM(BigBirdPreTrainedModel):

    """
    BigBirdForMaskedLM includes methods to create a BigBird model for masked language modeling tasks.

    This class inherits from BigBirdPreTrainedModel, and provides functionality to initialize the model,
    get and set the output embeddings, forward the model for masked language modeling, and prepare inputs
    for generation.

    Example:
        ```python
        >>> import torch
        >>> from transformers import AutoTokenizer, BigBirdForMaskedLM
        >>> from datasets import load_dataset
        ...
        >>> tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")
        >>> model = BigBirdForMaskedLM.from_pretrained("google/bigbird-roberta-base")
        >>> squad_ds = load_dataset("squad_v2", split="train")  # doctest: +IGNORE_RESULT
        ...
        >>> # select random long article
        >>> LONG_ARTICLE_TARGET = squad_ds[81514]["context"]
        >>> # select random sentence
        >>> LONG_ARTICLE_TARGET[332:398]
        'the highest values are very close to the theoretical maximum value'
        >>> # add mask_token
        >>> LONG_ARTICLE_TO_MASK = LONG_ARTICLE_TARGET.replace("maximum", "[MASK]")
        >>> inputs = tokenizer(LONG_ARTICLE_TO_MASK, return_tensors="pt")
        >>> # long article input
        >>> list(inputs["input_ids"].shape)
        [1, 919]
        >>> with torch.no_grad():
        ...     logits = model(**inputs).logits
        >>> # retrieve index of [MASK]
        >>> mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
        >>> predicted_token_id = logits[0, mask_token_index].argmax(dim=-1)
        >>> tokenizer.decode(predicted_token_id)
        'maximum'
        ```
    """
    _tied_weights_keys = ["cls.predictions.decoder.weight", "cls.predictions.decoder.bias"]

    def __init__(self, config):
        """
        Initializes an instance of the BigBirdForMaskedLM class.

        Args:
            self: The instance of the class.
            config: An object of the Config class containing the configuration settings for the model.

        Returns:
            None.

        Raises:
            None.
        """
        super().__init__(config)

        if config.is_decoder:
            logger.warning(
                "If you want to use `BigBirdForMaskedLM` make sure `config.is_decoder=False` for "
                "bi-directional self-attention."
            )

        self.bert = BigBirdModel(config)
        self.cls = BigBirdOnlyMLMHead(config)

        # Initialize weights and apply final processing
        self.post_init()

    def get_output_embeddings(self):
        """
        Returns the output embeddings for the BigBirdForMaskedLM model.

        Args:
            self (BigBirdForMaskedLM): The instance of the BigBirdForMaskedLM class.

        Returns:
            The output embedding layer (self.cls.predictions.decoder).

        Raises:
            None.
        """
        return self.cls.predictions.decoder

    def set_output_embeddings(self, new_embeddings):
        """
        This method sets the output embeddings for the BigBirdForMaskedLM model.

        Args:
            self (object): The instance of the BigBirdForMaskedLM class.
            new_embeddings (object): The new embeddings to be set as the output embeddings for the model.
                It can be of any valid type supported for model embeddings.

        Returns:
            None.

        Raises:
            None.
        """
        self.cls.predictions.decoder = new_embeddings

    def forward(
        self,
        input_ids: mindspore.Tensor = None,
        attention_mask: Optional[mindspore.Tensor] = None,
        token_type_ids: Optional[mindspore.Tensor] = None,
        position_ids: Optional[mindspore.Tensor] = None,
        head_mask: Optional[mindspore.Tensor] = None,
        inputs_embeds: Optional[mindspore.Tensor] = None,
        encoder_hidden_states: Optional[mindspore.Tensor] = None,
        encoder_attention_mask: Optional[mindspore.Tensor] = None,
        labels: Optional[mindspore.Tensor] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[MaskedLMOutput, Tuple[mindspore.Tensor]]:
        r"""
        Args:
            labels (`mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
                Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ...,
                config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are ignored (masked), the
                loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.

        Returns:
            Union[MaskedLMOutput, Tuple[mindspore.Tensor]]

        Example:
            ```python
            >>> import torch
            >>> from transformers import AutoTokenizer, BigBirdForMaskedLM
            >>> from datasets import load_dataset
            ...
            >>> tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")
            >>> model = BigBirdForMaskedLM.from_pretrained("google/bigbird-roberta-base")
            >>> squad_ds = load_dataset("squad_v2", split="train")  # doctest: +IGNORE_RESULT
            ...
            >>> # select random long article
            >>> LONG_ARTICLE_TARGET = squad_ds[81514]["context"]
            >>> # select random sentence
            >>> LONG_ARTICLE_TARGET[332:398]
            'the highest values are very close to the theoretical maximum value'
            >>> # add mask_token
            >>> LONG_ARTICLE_TO_MASK = LONG_ARTICLE_TARGET.replace("maximum", "[MASK]")
            >>> inputs = tokenizer(LONG_ARTICLE_TO_MASK, return_tensors="pt")
            >>> # long article input
            >>> list(inputs["input_ids"].shape)
            [1, 919]
            >>> with torch.no_grad():
            ...     logits = model(**inputs).logits
            >>> # retrieve index of [MASK]
            >>> mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
            >>> predicted_token_id = logits[0, mask_token_index].argmax(dim=-1)
            >>> tokenizer.decode(predicted_token_id)
            'maximum'
            ```

            ```python
            >>> labels = tokenizer(LONG_ARTICLE_TARGET, return_tensors="pt")["input_ids"]
            >>> labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
            >>> outputs = model(**inputs, labels=labels)
            >>> round(outputs.loss.item(), 2)
            1.99
            ```
        """
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        outputs = self.bert(
            input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            position_ids=position_ids,
            head_mask=head_mask,
            inputs_embeds=inputs_embeds,
            encoder_hidden_states=encoder_hidden_states,
            encoder_attention_mask=encoder_attention_mask,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )

        sequence_output = outputs[0]
        prediction_scores = self.cls(sequence_output)

        masked_lm_loss = None
        if labels is not None:
            masked_lm_loss = F.cross_entropy(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))

        if not return_dict:
            output = (prediction_scores,) + outputs[2:]
            return ((masked_lm_loss,) + output) if masked_lm_loss is not None else output

        return MaskedLMOutput(
            loss=masked_lm_loss,
            logits=prediction_scores,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )

    def prepare_inputs_for_generation(self, input_ids, attention_mask=None):
        """
        Prepares inputs for generation in the BigBirdForMaskedLM model.

        Args:
            self (BigBirdForMaskedLM): The instance of the BigBirdForMaskedLM class.
            input_ids (Tensor): The input tensor of shape (batch_size, sequence_length).
                The tensor represents the input token IDs.
            attention_mask (Tensor, optional): The attention mask tensor of shape (batch_size, sequence_length).
                It masks the padding tokens. Defaults to None.

        Returns:
            dict: A dictionary containing the prepared inputs for generation.
                The dictionary has the following keys:

                - 'input_ids' (Tensor): The input tensor of shape (batch_size, sequence_length + 1).
                  It includes an additional dummy PAD token at the end.
                - 'attention_mask' (Tensor): The attention mask tensor of shape (batch_size, sequence_length + 1).
                  It includes a 0 entry for the dummy token so that it is ignored.

        Raises:
            ValueError: If the PAD token is not defined for generation.
        """
        input_shape = input_ids.shape
        effective_batch_size = input_shape[0]

        #  add a dummy token
        if self.config.pad_token_id is None:
            raise ValueError("The PAD token should be defined for generation")
        attention_mask = ops.cat([attention_mask, attention_mask.new_zeros((attention_mask.shape[0], 1))], dim=-1)
        dummy_token = ops.full(
            (effective_batch_size, 1), self.config.pad_token_id, dtype=mindspore.int64
        )
        input_ids = ops.cat([input_ids, dummy_token], dim=1)

        return {"input_ids": input_ids, "attention_mask": attention_mask}
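The dummy-token step can be sketched with plain lists (a hypothetical `pad_token_id` value stands in for `config.pad_token_id`): every sequence gets one PAD token appended, and its attention-mask entry is 0 so the new position is ignored.

```python
def add_dummy_token(input_ids, attention_mask, pad_token_id):
    # Append one PAD token per sequence and mask it out with a 0 entry,
    # mirroring the ops.cat calls in prepare_inputs_for_generation.
    new_ids = [seq + [pad_token_id] for seq in input_ids]
    new_mask = [mask + [0] for mask in attention_mask]
    return {"input_ids": new_ids, "attention_mask": new_mask}

batch = add_dummy_token([[5, 6, 7]], [[1, 1, 1]], pad_token_id=0)
# batch["input_ids"] == [[5, 6, 7, 0]]; batch["attention_mask"] == [[1, 1, 1, 0]]
```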

mindnlp.transformers.models.big_bird.modeling_big_bird.BigBirdForMaskedLM.__init__(config)

Initializes an instance of the BigBirdForMaskedLM class.

PARAMETER DESCRIPTION
self

The instance of the class.

config

An object of the Config class containing the configuration settings for the model.

RETURNS DESCRIPTION

None.

Source code in mindnlp/transformers/models/big_bird/modeling_big_bird.py
def __init__(self, config):
    """
    Initializes an instance of the BigBirdForMaskedLM class.

    Args:
        self: The instance of the class.
        config: An object of the Config class containing the configuration settings for the model.

    Returns:
        None.

    Raises:
        None.
    """
    super().__init__(config)

    if config.is_decoder:
        logger.warning(
            "If you want to use `BigBirdForMaskedLM` make sure `config.is_decoder=False` for "
            "bi-directional self-attention."
        )

    self.bert = BigBirdModel(config)
    self.cls = BigBirdOnlyMLMHead(config)

    # Initialize weights and apply final processing
    self.post_init()

mindnlp.transformers.models.big_bird.modeling_big_bird.BigBirdForMaskedLM.forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, encoder_hidden_states=None, encoder_attention_mask=None, labels=None, output_attentions=None, output_hidden_states=None, return_dict=None)

PARAMETER DESCRIPTION
labels

Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].

TYPE: `mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional* DEFAULT: None

RETURNS DESCRIPTION
Union[MaskedLMOutput, Tuple[Tensor]]

Union[MaskedLMOutput, Tuple[mindspore.Tensor]]

Example
>>> import torch
>>> from transformers import AutoTokenizer, BigBirdForMaskedLM
>>> from datasets import load_dataset
...
>>> tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")
>>> model = BigBirdForMaskedLM.from_pretrained("google/bigbird-roberta-base")
>>> squad_ds = load_dataset("squad_v2", split="train")  # doctest: +IGNORE_RESULT
...
>>> # select random long article
>>> LONG_ARTICLE_TARGET = squad_ds[81514]["context"]
>>> # select random sentence
>>> LONG_ARTICLE_TARGET[332:398]
'the highest values are very close to the theoretical maximum value'
>>> # add mask_token
>>> LONG_ARTICLE_TO_MASK = LONG_ARTICLE_TARGET.replace("maximum", "[MASK]")
>>> inputs = tokenizer(LONG_ARTICLE_TO_MASK, return_tensors="pt")
>>> # long article input
>>> list(inputs["input_ids"].shape)
[1, 919]
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> # retrieve index of [MASK]
>>> mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
>>> predicted_token_id = logits[0, mask_token_index].argmax(dim=-1)
>>> tokenizer.decode(predicted_token_id)
'maximum'
>>> labels = tokenizer(LONG_ARTICLE_TARGET, return_tensors="pt")["input_ids"]
>>> labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
>>> outputs = model(**inputs, labels=labels)
>>> round(outputs.loss.item(), 2)
1.99
Source code in mindnlp/transformers/models/big_bird/modeling_big_bird.py
def forward(
    self,
    input_ids: mindspore.Tensor = None,
    attention_mask: Optional[mindspore.Tensor] = None,
    token_type_ids: Optional[mindspore.Tensor] = None,
    position_ids: Optional[mindspore.Tensor] = None,
    head_mask: Optional[mindspore.Tensor] = None,
    inputs_embeds: Optional[mindspore.Tensor] = None,
    encoder_hidden_states: Optional[mindspore.Tensor] = None,
    encoder_attention_mask: Optional[mindspore.Tensor] = None,
    labels: Optional[mindspore.Tensor] = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
) -> Union[MaskedLMOutput, Tuple[mindspore.Tensor]]:
    r"""
    Args:
        labels (`mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
            Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ...,
            config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are ignored (masked), the
            loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.

    Returns:
        Union[MaskedLMOutput, Tuple[mindspore.Tensor]]

    Example:
        ```python
        >>> import torch
        >>> from transformers import AutoTokenizer, BigBirdForMaskedLM
        >>> from datasets import load_dataset
        ...
        >>> tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")
        >>> model = BigBirdForMaskedLM.from_pretrained("google/bigbird-roberta-base")
        >>> squad_ds = load_dataset("squad_v2", split="train")  # doctest: +IGNORE_RESULT
        ...
        >>> # select random long article
        >>> LONG_ARTICLE_TARGET = squad_ds[81514]["context"]
        >>> # select random sentence
        >>> LONG_ARTICLE_TARGET[332:398]
        'the highest values are very close to the theoretical maximum value'
        >>> # add mask_token
        >>> LONG_ARTICLE_TO_MASK = LONG_ARTICLE_TARGET.replace("maximum", "[MASK]")
        >>> inputs = tokenizer(LONG_ARTICLE_TO_MASK, return_tensors="pt")
        >>> # long article input
        >>> list(inputs["input_ids"].shape)
        [1, 919]
        >>> with torch.no_grad():
        ...     logits = model(**inputs).logits
        >>> # retrieve index of [MASK]
        >>> mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
        >>> predicted_token_id = logits[0, mask_token_index].argmax(dim=-1)
        >>> tokenizer.decode(predicted_token_id)
        'maximum'
        ```

        ```python
        >>> labels = tokenizer(LONG_ARTICLE_TARGET, return_tensors="pt")["input_ids"]
        >>> labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
        >>> outputs = model(**inputs, labels=labels)
        >>> round(outputs.loss.item(), 2)
        1.99
        ```
    """
    return_dict = return_dict if return_dict is not None else self.config.use_return_dict

    outputs = self.bert(
        input_ids,
        attention_mask=attention_mask,
        token_type_ids=token_type_ids,
        position_ids=position_ids,
        head_mask=head_mask,
        inputs_embeds=inputs_embeds,
        encoder_hidden_states=encoder_hidden_states,
        encoder_attention_mask=encoder_attention_mask,
        output_attentions=output_attentions,
        output_hidden_states=output_hidden_states,
        return_dict=return_dict,
    )

    sequence_output = outputs[0]
    prediction_scores = self.cls(sequence_output)

    masked_lm_loss = None
    if labels is not None:
        masked_lm_loss = F.cross_entropy(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))

    if not return_dict:
        output = (prediction_scores,) + outputs[2:]
        return ((masked_lm_loss,) + output) if masked_lm_loss is not None else output

    return MaskedLMOutput(
        loss=masked_lm_loss,
        logits=prediction_scores,
        hidden_states=outputs.hidden_states,
        attentions=outputs.attentions,
    )
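When `labels` are supplied, positions labeled `-100` contribute nothing to the loss, as the docstring above states. A self-contained sketch of that masking rule (plain Python standing in for `F.cross_entropy`, with hypothetical logits over a 3-token vocabulary):

```python
import math

def masked_lm_loss(logits, labels, ignore_index=-100):
    """Mean cross-entropy over positions whose label is not ignore_index,
    mirroring how labels of -100 are skipped in the MLM loss."""
    total, count = 0.0, 0
    for scores, label in zip(logits, labels):
        if label == ignore_index:
            continue  # masked-out position: ignored entirely
        log_norm = math.log(sum(math.exp(s) for s in scores))
        total += log_norm - scores[label]  # -log softmax probability of the label
        count += 1
    return total / count if count else 0.0

# Two positions; only the first carries a real label, the second is masked out.
logits = [[2.0, 0.0, 0.0], [0.0, 5.0, 0.0]]
labels = [0, -100]
print(round(masked_lm_loss(logits, labels), 4))  # loss comes from position 0 only
```

Note the second position's confident logits never enter the average; only labeled (masked-token) positions are scored.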

mindnlp.transformers.models.big_bird.modeling_big_bird.BigBirdForMaskedLM.get_output_embeddings()

Returns the output embeddings for the BigBirdForMaskedLM model.

PARAMETER DESCRIPTION
self

The instance of the BigBirdForMaskedLM class.

TYPE: BigBirdForMaskedLM

RETURNS DESCRIPTION

The decoder layer (cls.predictions.decoder) used as the model's output embeddings.

Source code in mindnlp/transformers/models/big_bird/modeling_big_bird.py
def get_output_embeddings(self):
    """
    Returns the output embeddings for the BigBirdForMaskedLM model.

    Args:
        self (BigBirdForMaskedLM): The instance of the BigBirdForMaskedLM class.

    Returns:
        The decoder layer (`cls.predictions.decoder`) used as the model's output embeddings.

    Raises:
        None.
    """
    return self.cls.predictions.decoder

mindnlp.transformers.models.big_bird.modeling_big_bird.BigBirdForMaskedLM.prepare_inputs_for_generation(input_ids, attention_mask=None)

Prepares inputs for generation in the BigBirdForMaskedLM model.

PARAMETER DESCRIPTION
self

The instance of the BigBirdForMaskedLM class.

TYPE: BigBirdForMaskedLM

input_ids

The input tensor of shape (batch_size, sequence_length). The tensor represents the input token IDs.

TYPE: Tensor

attention_mask

The attention mask tensor of shape (batch_size, sequence_length). It masks the padding tokens. Defaults to None.

TYPE: Tensor DEFAULT: None

RETURNS DESCRIPTION
dict

A dictionary containing the prepared inputs for generation. The dictionary has the following keys:

  • 'input_ids' (Tensor): The input tensor of shape (batch_size, sequence_length + 1). It includes an additional dummy token at the end.
  • 'attention_mask' (Tensor): The attention mask tensor of shape (batch_size, sequence_length + 1). It includes an additional attention mask for the dummy token.
RAISES DESCRIPTION
ValueError

If the PAD token is not defined for generation.

Source code in mindnlp/transformers/models/big_bird/modeling_big_bird.py
def prepare_inputs_for_generation(self, input_ids, attention_mask=None):
    """
    Prepares inputs for generation in the BigBirdForMaskedLM model.

    Args:
        self (BigBirdForMaskedLM): The instance of the BigBirdForMaskedLM class.
        input_ids (Tensor): The input tensor of shape (batch_size, sequence_length).
            The tensor represents the input token IDs.
        attention_mask (Tensor, optional): The attention mask tensor of shape (batch_size, sequence_length).
            It masks the padding tokens. Defaults to None.

    Returns:
        dict: A dictionary containing the prepared inputs for generation.
            The dictionary has the following keys:

            - 'input_ids' (Tensor): The input tensor of shape (batch_size, sequence_length + 1).
            It includes an additional dummy token at the end.
            - 'attention_mask' (Tensor): The attention mask tensor of shape (batch_size, sequence_length + 1).
            It includes an additional attention mask for the dummy token.

    Raises:
        ValueError: If the PAD token is not defined for generation.
    """
    input_shape = input_ids.shape
    effective_batch_size = input_shape[0]

    #  add a dummy token
    if self.config.pad_token_id is None:
        raise ValueError("The PAD token should be defined for generation")
    if attention_mask is None:
        # Without an explicit mask, attend over every input token.
        attention_mask = ops.ones(input_shape, dtype=mindspore.int64)
    attention_mask = ops.cat([attention_mask, attention_mask.new_zeros((attention_mask.shape[0], 1))], dim=-1)
    dummy_token = ops.full(
        (effective_batch_size, 1), self.config.pad_token_id, dtype=mindspore.int64
    )
    input_ids = ops.cat([input_ids, dummy_token], dim=1)

    return {"input_ids": input_ids, "attention_mask": attention_mask}

mindnlp.transformers.models.big_bird.modeling_big_bird.BigBirdForMaskedLM.set_output_embeddings(new_embeddings)

This method sets the output embeddings for the BigBirdForMaskedLM model.

PARAMETER DESCRIPTION
self

The instance of the BigBirdForMaskedLM class.

TYPE: object

new_embeddings

The new embeddings to be set as the output embeddings for the model. It can be of any valid type supported for model embeddings.

TYPE: object

RETURNS DESCRIPTION

None.

Source code in mindnlp/transformers/models/big_bird/modeling_big_bird.py
def set_output_embeddings(self, new_embeddings):
    """
    This method sets the output embeddings for the BigBirdForMaskedLM model.

    Args:
        self (object): The instance of the BigBirdForMaskedLM class.
        new_embeddings (object): The new embeddings to be set as the output embeddings for the model.
            It can be of any valid type supported for model embeddings.

    Returns:
        None.

    Raises:
        None.
    """
    self.cls.predictions.decoder = new_embeddings

mindnlp.transformers.models.big_bird.modeling_big_bird.BigBirdForMultipleChoice

Bases: BigBirdPreTrainedModel

BigBirdForMultipleChoice is a class for multiple choice question answering using the BigBird model. It inherits from BigBirdPreTrainedModel and provides methods to run the model's forward pass for multiple choice tasks.

ATTRIBUTE DESCRIPTION
bert

The BigBird model used for processing input sequences.

TYPE: BigBirdModel

dropout

Dropout layer for regularization.

TYPE: Dropout

classifier

Dense layer for classification.

TYPE: Linear

METHOD DESCRIPTION
__init__

Initializes the BigBirdForMultipleChoice class with the given configuration.

Source code in mindnlp/transformers/models/big_bird/modeling_big_bird.py
class BigBirdForMultipleChoice(BigBirdPreTrainedModel):

    """
    BigBirdForMultipleChoice is a class for multiple choice question answering using the BigBird model.
    It inherits from BigBirdPreTrainedModel and provides methods to run the model's forward pass for multiple choice tasks.

    Attributes:
        bert (BigBirdModel): The BigBird model used for processing input sequences.
        dropout (nn.Dropout): Dropout layer for regularization.
        classifier (nn.Linear): Dense layer for classification.

    Methods:
        __init__(config): Initializes the BigBirdForMultipleChoice class with the given configuration.
        forward(input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, labels,
            output_attentions, output_hidden_states, return_dict): Runs the forward pass for multiple choice tasks.
    """
    def __init__(self, config):
        """
        Initializes an instance of the BigBirdForMultipleChoice class.

        Args:
            self: The instance of the class.
            config: An object containing configuration settings for the BigBirdModel.

        Returns:
            None.

        Raises:
            NotImplementedError: If the method 'post_init' is not implemented in the derived class.
            TypeError: If the 'config' parameter is not of the expected type.
        """
        super().__init__(config)

        self.bert = BigBirdModel(config)
        self.dropout = nn.Dropout(p=config.hidden_dropout_prob)
        self.classifier = nn.Linear(config.hidden_size, 1)

        # Initialize weights and apply final processing
        self.post_init()

    def forward(
        self,
        input_ids: mindspore.Tensor = None,
        attention_mask: Optional[mindspore.Tensor] = None,
        token_type_ids: Optional[mindspore.Tensor] = None,
        position_ids: Optional[mindspore.Tensor] = None,
        head_mask: Optional[mindspore.Tensor] = None,
        inputs_embeds: Optional[mindspore.Tensor] = None,
        labels: Optional[mindspore.Tensor] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[MultipleChoiceModelOutput, Tuple[mindspore.Tensor]]:
        r"""
        Args:
            labels (`mindspore.Tensor` of shape `(batch_size,)`, *optional*):
                Labels for computing the multiple choice classification loss. Indices should be in `[0, ...,
                num_choices-1]` where `num_choices` is the size of the second dimension of the input tensors. (See
                `input_ids` above)
        """
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict
        num_choices = input_ids.shape[1] if input_ids is not None else inputs_embeds.shape[1]

        input_ids = input_ids.view(-1, input_ids.shape[-1]) if input_ids is not None else None
        attention_mask = attention_mask.view(-1, attention_mask.shape[-1]) if attention_mask is not None else None
        token_type_ids = token_type_ids.view(-1, token_type_ids.shape[-1]) if token_type_ids is not None else None
        position_ids = position_ids.view(-1, position_ids.shape[-1]) if position_ids is not None else None
        inputs_embeds = (
            inputs_embeds.view(-1, inputs_embeds.shape[-2], inputs_embeds.shape[-1])
            if inputs_embeds is not None
            else None
        )

        outputs = self.bert(
            input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            position_ids=position_ids,
            head_mask=head_mask,
            inputs_embeds=inputs_embeds,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )

        pooled_output = outputs[1]

        pooled_output = self.dropout(pooled_output)
        logits = self.classifier(pooled_output)
        reshaped_logits = logits.view(-1, num_choices)

        loss = None
        if labels is not None:
            loss = F.cross_entropy(reshaped_logits, labels)

        if not return_dict:
            output = (reshaped_logits,) + outputs[2:]
            return ((loss,) + output) if loss is not None else output

        return MultipleChoiceModelOutput(
            loss=loss,
            logits=reshaped_logits,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )
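The `view` calls in `forward` fold the choice dimension into the batch dimension so each choice is encoded as an independent sequence, then regroup the per-choice logits for classification. A plain-Python sketch of that reshaping (hypothetical token IDs and scores; lists stand in for tensors):

```python
# A hypothetical batch: 2 questions, 3 answer choices each, 4 tokens per choice.
batch = [
    [[1, 2, 3, 4], [1, 2, 5, 6], [1, 2, 7, 8]],
    [[9, 10, 11, 12], [9, 10, 13, 14], [9, 10, 15, 16]],
]
num_choices = len(batch[0])

# input_ids.view(-1, seq_len): each choice becomes its own row, so the encoder
# sees a flat batch of batch_size * num_choices sequences.
flat = [choice for example in batch for choice in example]
print(len(flat))  # 6 rows of 4 tokens

# The classifier emits one logit per flattened row; logits.view(-1, num_choices)
# regroups them per question so cross-entropy runs over the choices.
logits = [0.1, 0.7, 0.2, 0.9, 0.05, 0.05]  # hypothetical per-choice scores
reshaped = [logits[i:i + num_choices] for i in range(0, len(logits), num_choices)]
print(reshaped)  # one row of 3 scores per question
```

The label for each question is then simply the index of the correct choice in its regrouped row.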

mindnlp.transformers.models.big_bird.modeling_big_bird.BigBirdForMultipleChoice.__init__(config)

Initializes an instance of the BigBirdForMultipleChoice class.

PARAMETER DESCRIPTION
self

The instance of the class.

config

An object containing configuration settings for the BigBirdModel.

RETURNS DESCRIPTION

None.

RAISES DESCRIPTION
NotImplementedError

If the method 'post_init' is not implemented in the derived class.

TypeError

If the 'config' parameter is not of the expected type.

Source code in mindnlp/transformers/models/big_bird/modeling_big_bird.py
def __init__(self, config):
    """
    Initializes an instance of the BigBirdForMultipleChoice class.

    Args:
        self: The instance of the class.
        config: An object containing configuration settings for the BigBirdModel.

    Returns:
        None.

    Raises:
        NotImplementedError: If the method 'post_init' is not implemented in the derived class.
        TypeError: If the 'config' parameter is not of the expected type.
    """
    super().__init__(config)

    self.bert = BigBirdModel(config)
    self.dropout = nn.Dropout(p=config.hidden_dropout_prob)
    self.classifier = nn.Linear(config.hidden_size, 1)

    # Initialize weights and apply final processing
    self.post_init()

mindnlp.transformers.models.big_bird.modeling_big_bird.BigBirdForMultipleChoice.forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, labels=None, output_attentions=None, output_hidden_states=None, return_dict=None)

PARAMETER DESCRIPTION
labels

Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See input_ids above)

TYPE: `mindspore.Tensor` of shape `(batch_size,)`, *optional* DEFAULT: None

Source code in mindnlp/transformers/models/big_bird/modeling_big_bird.py
def forward(
    self,
    input_ids: mindspore.Tensor = None,
    attention_mask: Optional[mindspore.Tensor] = None,
    token_type_ids: Optional[mindspore.Tensor] = None,
    position_ids: Optional[mindspore.Tensor] = None,
    head_mask: Optional[mindspore.Tensor] = None,
    inputs_embeds: Optional[mindspore.Tensor] = None,
    labels: Optional[mindspore.Tensor] = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
) -> Union[MultipleChoiceModelOutput, Tuple[mindspore.Tensor]]:
    r"""
    Args:
        labels (`mindspore.Tensor` of shape `(batch_size,)`, *optional*):
            Labels for computing the multiple choice classification loss. Indices should be in `[0, ...,
            num_choices-1]` where `num_choices` is the size of the second dimension of the input tensors. (See
            `input_ids` above)
    """
    return_dict = return_dict if return_dict is not None else self.config.use_return_dict
    num_choices = input_ids.shape[1] if input_ids is not None else inputs_embeds.shape[1]

    input_ids = input_ids.view(-1, input_ids.shape[-1]) if input_ids is not None else None
    attention_mask = attention_mask.view(-1, attention_mask.shape[-1]) if attention_mask is not None else None
    token_type_ids = token_type_ids.view(-1, token_type_ids.shape[-1]) if token_type_ids is not None else None
    position_ids = position_ids.view(-1, position_ids.shape[-1]) if position_ids is not None else None
    inputs_embeds = (
        inputs_embeds.view(-1, inputs_embeds.shape[-2], inputs_embeds.shape[-1])
        if inputs_embeds is not None
        else None
    )

    outputs = self.bert(
        input_ids,
        attention_mask=attention_mask,
        token_type_ids=token_type_ids,
        position_ids=position_ids,
        head_mask=head_mask,
        inputs_embeds=inputs_embeds,
        output_attentions=output_attentions,
        output_hidden_states=output_hidden_states,
        return_dict=return_dict,
    )

    pooled_output = outputs[1]

    pooled_output = self.dropout(pooled_output)
    logits = self.classifier(pooled_output)
    reshaped_logits = logits.view(-1, num_choices)

    loss = None
    if labels is not None:
        loss = F.cross_entropy(reshaped_logits, labels)

    if not return_dict:
        output = (reshaped_logits,) + outputs[2:]
        return ((loss,) + output) if loss is not None else output

    return MultipleChoiceModelOutput(
        loss=loss,
        logits=reshaped_logits,
        hidden_states=outputs.hidden_states,
        attentions=outputs.attentions,
    )

mindnlp.transformers.models.big_bird.modeling_big_bird.BigBirdForPreTraining

Bases: BigBirdPreTrainedModel

This class represents a BigBird model for pre-training tasks, inheriting from BigBirdPreTrainedModel. It includes methods for initialization, getting and setting output embeddings, and running the model's forward pass for pre-training tasks. The constructor initializes the model with the provided configuration, sets up the BigBird model and pre-training heads, and executes post-initialization steps. Methods are provided for retrieving and updating the output embeddings. The 'forward' method takes various input parameters, computes the masked language modeling loss and next sequence prediction loss if labels are provided, and returns the pre-training outputs. An example usage is provided in the docstring.

Source code in mindnlp/transformers/models/big_bird/modeling_big_bird.py
class BigBirdForPreTraining(BigBirdPreTrainedModel):

    """
    This class represents a BigBird model for pre-training tasks, inheriting from BigBirdPreTrainedModel.
    It includes methods for initialization, getting and setting output embeddings, and running the
    model's forward pass for pre-training tasks. The constructor initializes the model with the
    provided configuration, sets up the BigBird model and pre-training heads, and executes
    post-initialization steps. Methods are provided for retrieving and updating the output embeddings.
    The 'forward' method takes various input parameters, computes the masked language modeling loss
    and next sequence prediction loss if labels are provided, and returns the pre-training outputs.
    An example usage is provided in the docstring.
    """
    _tied_weights_keys = ["cls.predictions.decoder.weight", "cls.predictions.decoder.bias"]

    def __init__(self, config):
        """
        Initializes an instance of the BigBirdForPreTraining class.

        Args:
            self: The instance of the class.
            config: An object of type 'Config' that contains the configuration parameters for the model.
                It should be an instance of the BigBirdConfig class and must contain the following attributes:

                - add_pooling_layer (bool): Whether to add a pooling layer to the model. Default is True.

        Returns:
            None

        Raises:
            None
        """
        super().__init__(config)

        self.bert = BigBirdModel(config, add_pooling_layer=True)
        self.cls = BigBirdPreTrainingHeads(config)

        # Initialize weights and apply final processing
        self.post_init()

    def get_output_embeddings(self):
        """
        This method returns the output embeddings for the BigBirdForPreTraining model.

        Args:
            self: An instance of the BigBirdForPreTraining class.

        Returns:
            The decoder layer (`cls.predictions.decoder`) used as the model's output embeddings.

        Raises:
            This method does not raise any exceptions.
        """
        return self.cls.predictions.decoder

    def set_output_embeddings(self, new_embeddings):
        """
        Sets the output embeddings of the BigBirdForPreTraining model.

        Args:
            self (BigBirdForPreTraining): The instance of the BigBirdForPreTraining class.
            new_embeddings (Any): The new embeddings to be set as the output embeddings.
                This can be a tensor or any object that can be assigned to the output embeddings attribute.

        Returns:
            None.

        Raises:
            None.

        Note:
            This method allows the user to set the output embeddings of the BigBirdForPreTraining model.
            The output embeddings are assigned to the `predictions.decoder` attribute of the model's `cls` object.
            By setting new embeddings, the user can customize or update the output embeddings used in the model's predictions.
        """
        self.cls.predictions.decoder = new_embeddings

    def forward(
        self,
        input_ids: mindspore.Tensor = None,
        attention_mask: Optional[mindspore.Tensor] = None,
        token_type_ids: Optional[mindspore.Tensor] = None,
        position_ids: Optional[mindspore.Tensor] = None,
        head_mask: Optional[mindspore.Tensor] = None,
        inputs_embeds: Optional[mindspore.Tensor] = None,
        labels: Optional[mindspore.Tensor] = None,
        next_sentence_label: Optional[mindspore.Tensor] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[BigBirdForPreTrainingOutput, Tuple[mindspore.Tensor]]:
        r"""
        Args:
            labels (`mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
                Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ...,
                config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are ignored (masked),
                the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`

            next_sentence_label (`mindspore.Tensor` of shape `(batch_size,)`, *optional*):
                Labels for computing the next sequence prediction (classification) loss. If specified, nsp loss will be
                added to masked_lm loss. Input should be a sequence pair (see `input_ids` docstring) Indices should be in
                `[0, 1]`:

                - 0 indicates sequence B is a continuation of sequence A,
                - 1 indicates sequence B is a random sequence.

            kwargs (`Dict[str, any]`, optional, defaults to *{}*):
                Used to hide legacy arguments that have been deprecated.

        Returns:
            Union[BigBirdForPreTrainingOutput, Tuple[mindspore.Tensor]]

        Example:
            ```python
            >>> from transformers import AutoTokenizer, BigBirdForPreTraining
            >>> import torch
            ...
            >>> tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")
            >>> model = BigBirdForPreTraining.from_pretrained("google/bigbird-roberta-base")
            ...
            >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
            >>> outputs = model(**inputs)
            ...
            >>> prediction_logits = outputs.prediction_logits
            >>> seq_relationship_logits = outputs.seq_relationship_logits
            ```
        """
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        outputs = self.bert(
            input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            position_ids=position_ids,
            head_mask=head_mask,
            inputs_embeds=inputs_embeds,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )

        sequence_output, pooled_output = outputs[:2]
        prediction_scores, seq_relationship_score = self.cls(sequence_output, pooled_output)

        total_loss = None
        if labels is not None:
            total_loss = F.cross_entropy(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))

        if next_sentence_label is not None and total_loss is not None:
            next_sentence_loss = F.cross_entropy(seq_relationship_score.view(-1, 2), next_sentence_label.view(-1))
            total_loss = total_loss + next_sentence_loss

        if not return_dict:
            output = (prediction_scores, seq_relationship_score) + outputs[2:]
            return ((total_loss,) + output) if total_loss is not None else output

        return BigBirdForPreTrainingOutput(
            loss=total_loss,
            prediction_logits=prediction_scores,
            seq_relationship_logits=seq_relationship_score,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )

mindnlp.transformers.models.big_bird.modeling_big_bird.BigBirdForPreTraining.__init__(config)

Initializes an instance of the BigBirdForPreTraining class.

PARAMETER DESCRIPTION
self

The instance of the class.

config

An object of type 'Config' that contains the configuration parameters for the model. It should be an instance of the BigBirdConfig class and must contain the following attributes:

  • add_pooling_layer (bool): Whether to add a pooling layer to the model. Default is True.

RETURNS DESCRIPTION

None

Source code in mindnlp/transformers/models/big_bird/modeling_big_bird.py
def __init__(self, config):
    """
    Initializes an instance of the BigBirdForPreTraining class.

    Args:
        self: The instance of the class.
        config: An object of type 'Config' that contains the configuration parameters for the model.
            It should be an instance of the BigBirdConfig class and must contain the following attributes:

            - add_pooling_layer (bool): Whether to add a pooling layer to the model. Default is True.

    Returns:
        None

    Raises:
        None
    """
    super().__init__(config)

    self.bert = BigBirdModel(config, add_pooling_layer=True)
    self.cls = BigBirdPreTrainingHeads(config)

    # Initialize weights and apply final processing
    self.post_init()

mindnlp.transformers.models.big_bird.modeling_big_bird.BigBirdForPreTraining.forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, labels=None, next_sentence_label=None, output_attentions=None, output_hidden_states=None, return_dict=None)

PARAMETER DESCRIPTION
labels

Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].

TYPE: `mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional* DEFAULT: None

next_sentence_label

Labels for computing the next sequence prediction (classification) loss. If specified, the NSP loss is added to the masked LM loss. Input should be a sequence pair (see input_ids docstring). Indices should be in [0, 1]:

  • 0 indicates sequence B is a continuation of sequence A,
  • 1 indicates sequence B is a random sequence.

TYPE: `mindspore.Tensor` of shape `(batch_size,)`, *optional* DEFAULT: None

kwargs

Used to hide legacy arguments that have been deprecated.

TYPE: `Dict[str, any]`, optional, defaults to *{}*

RETURNS DESCRIPTION
Union[BigBirdForPreTrainingOutput, Tuple[Tensor]]

Union[BigBirdForPreTrainingOutput, Tuple[mindspore.Tensor]]

Example
>>> from transformers import AutoTokenizer, BigBirdForPreTraining
>>> import torch
...
>>> tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")
>>> model = BigBirdForPreTraining.from_pretrained("google/bigbird-roberta-base")
...
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)
...
>>> prediction_logits = outputs.prediction_logits
>>> seq_relationship_logits = outputs.seq_relationship_logits
Source code in mindnlp/transformers/models/big_bird/modeling_big_bird.py
def forward(
    self,
    input_ids: mindspore.Tensor = None,
    attention_mask: Optional[mindspore.Tensor] = None,
    token_type_ids: Optional[mindspore.Tensor] = None,
    position_ids: Optional[mindspore.Tensor] = None,
    head_mask: Optional[mindspore.Tensor] = None,
    inputs_embeds: Optional[mindspore.Tensor] = None,
    labels: Optional[mindspore.Tensor] = None,
    next_sentence_label: Optional[mindspore.Tensor] = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
) -> Union[BigBirdForPreTrainingOutput, Tuple[mindspore.Tensor]]:
    r"""
    Args:
        labels (`mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
            Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ...,
            config.vocab_size]` (see `input_ids` docstring). Tokens with indices set to `-100` are ignored (masked);
            the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.

        next_sentence_label (`mindspore.Tensor` of shape `(batch_size,)`, *optional*):
            Labels for computing the next sequence prediction (classification) loss. If specified, the NSP loss is
            added to the masked LM loss. Input should be a sequence pair (see `input_ids` docstring). Indices should
            be in `[0, 1]`:

            - 0 indicates sequence B is a continuation of sequence A,
            - 1 indicates sequence B is a random sequence.

        kwargs (`Dict[str, Any]`, *optional*, defaults to `{}`):
            Used to hide legacy arguments that have been deprecated.

    Returns:
        Union[BigBirdForPreTrainingOutput, Tuple[mindspore.Tensor]]

    Example:
        ```python
        >>> from transformers import AutoTokenizer, BigBirdForPreTraining
        >>> import torch
        ...
        >>> tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")
        >>> model = BigBirdForPreTraining.from_pretrained("google/bigbird-roberta-base")
        ...
        >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
        >>> outputs = model(**inputs)
        ...
        >>> prediction_logits = outputs.prediction_logits
        >>> seq_relationship_logits = outputs.seq_relationship_logits
        ```
    """
    return_dict = return_dict if return_dict is not None else self.config.use_return_dict

    outputs = self.bert(
        input_ids,
        attention_mask=attention_mask,
        token_type_ids=token_type_ids,
        position_ids=position_ids,
        head_mask=head_mask,
        inputs_embeds=inputs_embeds,
        output_attentions=output_attentions,
        output_hidden_states=output_hidden_states,
        return_dict=return_dict,
    )

    sequence_output, pooled_output = outputs[:2]
    prediction_scores, seq_relationship_score = self.cls(sequence_output, pooled_output)

    total_loss = None
    if labels is not None:
        total_loss = F.cross_entropy(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))

    if next_sentence_label is not None and total_loss is not None:
        next_sentence_loss = F.cross_entropy(seq_relationship_score.view(-1, 2), next_sentence_label.view(-1))
        total_loss = total_loss + next_sentence_loss

    if not return_dict:
        output = (prediction_scores, seq_relationship_score) + outputs[2:]
        return ((total_loss,) + output) if total_loss is not None else output

    return BigBirdForPreTrainingOutput(
        loss=total_loss,
        prediction_logits=prediction_scores,
        seq_relationship_logits=seq_relationship_score,
        hidden_states=outputs.hidden_states,
        attentions=outputs.attentions,
    )

mindnlp.transformers.models.big_bird.modeling_big_bird.BigBirdForPreTraining.get_output_embeddings()

This method returns the output embeddings for the BigBirdForPreTraining model.

PARAMETER DESCRIPTION
self

An instance of the BigBirdForPreTraining class.

RETURNS DESCRIPTION
The `cls.predictions.decoder` module, which serves as the model's output embeddings.

Source code in mindnlp/transformers/models/big_bird/modeling_big_bird.py
def get_output_embeddings(self):
    """
    This method returns the output embeddings for the BigBirdForPreTraining model.

    Args:
        self: An instance of the BigBirdForPreTraining class.

    Returns:
        The `cls.predictions.decoder` module, which serves as the model's output embeddings.

    Raises:
        This method does not raise any exceptions.
    """
    return self.cls.predictions.decoder

mindnlp.transformers.models.big_bird.modeling_big_bird.BigBirdForPreTraining.set_output_embeddings(new_embeddings)

Sets the output embeddings of the BigBirdForPreTraining model.

PARAMETER DESCRIPTION
self

The instance of the BigBirdForPreTraining class.

TYPE: BigBirdForPreTraining

new_embeddings

The new embeddings to be set as the output embeddings. This can be a tensor or any object that can be assigned to the output embeddings attribute.

TYPE: Any

RETURNS DESCRIPTION

None.

Note

This method allows the user to set the output embeddings of the BigBirdForPreTraining model. The output embeddings are assigned to the predictions.decoder attribute of the model's cls object. By setting new embeddings, the user can customize or update the output embeddings used in the model's predictions.

Source code in mindnlp/transformers/models/big_bird/modeling_big_bird.py
def set_output_embeddings(self, new_embeddings):
    """
    Sets the output embeddings of the BigBirdForPreTraining model.

    Args:
        self (BigBirdForPreTraining): The instance of the BigBirdForPreTraining class.
        new_embeddings (Any): The new embeddings to be set as the output embeddings.
            This can be a tensor or any object that can be assigned to the output embeddings attribute.

    Returns:
        None.

    Raises:
        None.

    Note:
        This method allows the user to set the output embeddings of the BigBirdForPreTraining model.
        The output embeddings are assigned to the `predictions.decoder` attribute of the model's `cls` object.
        By setting new embeddings, the user can customize or update the output embeddings used in the model's predictions.
    """
    self.cls.predictions.decoder = new_embeddings
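The getter/setter pair simply reads and writes `cls.predictions.decoder`. A minimal stand-in sketch of that round trip (placeholder classes with hypothetical names, no MindSpore required):

```python
from types import SimpleNamespace

class TinyPreTrainingModel:
    """Mimics how BigBirdForPreTraining exposes its output embeddings."""
    def __init__(self):
        # `decoder` stands in for the nn.Dense projection onto the vocabulary.
        self.cls = SimpleNamespace(predictions=SimpleNamespace(decoder="tied-decoder"))

    def get_output_embeddings(self):
        return self.cls.predictions.decoder

    def set_output_embeddings(self, new_embeddings):
        self.cls.predictions.decoder = new_embeddings

model = TinyPreTrainingModel()
model.set_output_embeddings("resized-decoder")  # e.g. after resizing the vocabulary
```

This is the pattern the base class relies on when tying or resizing embeddings: it always goes through the getter/setter rather than touching the attribute directly.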

mindnlp.transformers.models.big_bird.modeling_big_bird.BigBirdForQuestionAnswering

Bases: BigBirdPreTrainedModel

The BigBirdForQuestionAnswering class represents a model for question answering using the BigBird architecture. It is a subclass of BigBirdPreTrainedModel and provides methods for training, evaluating, and predicting question answering tasks.

ATTRIBUTE DESCRIPTION
`config`

An instance of BigBirdConfig that holds the model configuration.

`num_labels`

The number of labels for the question answering task.

`sep_token_id`

The token ID for the separator token in the input.

`bert`

The BigBirdModel instance that serves as the base model.

`qa_classifier`

The BigBirdForQuestionAnsweringHead instance that performs question answering classification.

METHOD DESCRIPTION
`__init__`

Initializes the BigBirdForQuestionAnswering instance.

`forward`

Constructs the model for question answering.

`prepare_question_mask`

Prepares a question mask for question answering.

Example
>>> import torch
>>> import mindspore
>>> from transformers import AutoTokenizer, BigBirdForQuestionAnswering
>>> from datasets import load_dataset
...
>>> tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")
>>> model = BigBirdForQuestionAnswering.from_pretrained("google/bigbird-roberta-base")
>>> squad_ds = load_dataset("squad_v2", split="train")
...
>>> # select random article and question
>>> LONG_ARTICLE = squad_ds[81514]["context"]
>>> QUESTION = squad_ds[81514]["question"]
>>> inputs = tokenizer(QUESTION, LONG_ARTICLE, return_tensors="pt")
...
>>> with torch.no_grad():
...     outputs = model(**inputs)
...
>>> answer_start_index = outputs.start_logits.argmax()
>>> answer_end_index = outputs.end_logits.argmax()
>>> predict_answer_token_ids = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
>>> predict_answer_token = tokenizer.decode(predict_answer_token_ids)
...
>>> target_start_index, target_end_index = mindspore.tensor([130]), mindspore.tensor([132])
>>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
>>> loss = outputs.loss
Source code in mindnlp/transformers/models/big_bird/modeling_big_bird.py
class BigBirdForQuestionAnswering(BigBirdPreTrainedModel):

    """
    The `BigBirdForQuestionAnswering` class represents a model for question answering using the BigBird architecture.
    It is a subclass of `BigBirdPreTrainedModel` and provides methods for training, evaluating,
    and predicting question answering tasks.

    Attributes:
        `config`: An instance of `BigBirdConfig` that holds the model configuration.
        `num_labels`: The number of labels for the question answering task.
        `sep_token_id`: The token ID for the separator token in the input.
        `bert`: The BigBirdModel instance that serves as the base model.
        `qa_classifier`: The BigBirdForQuestionAnsweringHead instance that performs question answering classification.

    Methods:
        `__init__(self, config, add_pooling_layer=False)`: Initializes the `BigBirdForQuestionAnswering` instance.
        `forward(self, input_ids, attention_mask, question_lengths, token_type_ids, position_ids, head_mask,
            inputs_embeds, start_positions, end_positions, output_attentions, output_hidden_states, return_dict)`:
            Constructs the model for question answering.
        `prepare_question_mask(q_lengths, maxlen)`: Prepares a question mask for question answering.

    Example:
        ```python
        >>> import torch
        >>> import mindspore
        >>> from transformers import AutoTokenizer, BigBirdForQuestionAnswering
        >>> from datasets import load_dataset
        ...
        >>> tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")
        >>> model = BigBirdForQuestionAnswering.from_pretrained("google/bigbird-roberta-base")
        >>> squad_ds = load_dataset("squad_v2", split="train")
        ...
        >>> # select random article and question
        >>> LONG_ARTICLE = squad_ds[81514]["context"]
        >>> QUESTION = squad_ds[81514]["question"]
        >>> inputs = tokenizer(QUESTION, LONG_ARTICLE, return_tensors="pt")
        ...
        >>> with torch.no_grad():
        ...     outputs = model(**inputs)
        ...
        >>> answer_start_index = outputs.start_logits.argmax()
        >>> answer_end_index = outputs.end_logits.argmax()
        >>> predict_answer_token_ids = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
        >>> predict_answer_token = tokenizer.decode(predict_answer_token_ids)
        ...
        >>> target_start_index, target_end_index = mindspore.tensor([130]), mindspore.tensor([132])
        >>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
        >>> loss = outputs.loss
        ```
    """
    def __init__(self, config, add_pooling_layer=False):
        '''
        __init__

        Initializes an instance of the BigBirdForQuestionAnswering class.

        Args:
            self (object): The instance of the class.
            config (object): The configuration object containing the model configuration.
            add_pooling_layer (bool, optional): A boolean indicating whether to add a pooling layer. Defaults to False.

        Returns:
            None.

        Raises:
            None
        '''
        super().__init__(config)

        config.num_labels = 2
        self.num_labels = config.num_labels
        self.sep_token_id = config.sep_token_id

        self.bert = BigBirdModel(config, add_pooling_layer=add_pooling_layer)
        self.qa_classifier = BigBirdForQuestionAnsweringHead(config)

        # Initialize weights and apply final processing
        self.post_init()

    def forward(
        self,
        input_ids: Optional[mindspore.Tensor] = None,
        attention_mask: Optional[mindspore.Tensor] = None,
        question_lengths: Optional[mindspore.Tensor] = None,
        token_type_ids: Optional[mindspore.Tensor] = None,
        position_ids: Optional[mindspore.Tensor] = None,
        head_mask: Optional[mindspore.Tensor] = None,
        inputs_embeds: Optional[mindspore.Tensor] = None,
        start_positions: Optional[mindspore.Tensor] = None,
        end_positions: Optional[mindspore.Tensor] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[BigBirdForQuestionAnsweringModelOutput, Tuple[mindspore.Tensor]]:
        r"""
        Args:
            start_positions (`mindspore.Tensor` of shape `(batch_size,)`, *optional*):
                Labels for position (index) of the start of the labelled span for computing the token classification loss.
                Positions are clamped to the length of the sequence (`sequence_length`). Positions outside of the sequence
                are not taken into account for computing the loss.
            end_positions (`mindspore.Tensor` of shape `(batch_size,)`, *optional*):
                Labels for position (index) of the end of the labelled span for computing the token classification loss.
                Positions are clamped to the length of the sequence (`sequence_length`). Positions outside of the sequence
                are not taken into account for computing the loss.

        Returns:
            Union[BigBirdForQuestionAnsweringModelOutput, Tuple[mindspore.Tensor]]

        Example:
            ```python
            >>> import torch
            >>> from transformers import AutoTokenizer, BigBirdForQuestionAnswering
            >>> from datasets import load_dataset
            ...
            >>> tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")
            >>> model = BigBirdForQuestionAnswering.from_pretrained("google/bigbird-roberta-base")
            >>> squad_ds = load_dataset("squad_v2", split="train")  # doctest: +IGNORE_RESULT
            ...
            >>> # select random article and question
            >>> LONG_ARTICLE = squad_ds[81514]["context"]
            >>> QUESTION = squad_ds[81514]["question"]
            >>> QUESTION
            'During daytime how high can the temperatures reach?'
            >>> inputs = tokenizer(QUESTION, LONG_ARTICLE, return_tensors="pt")
            >>> # long article and question input
            >>> list(inputs["input_ids"].shape)
            [1, 929]
            >>> with torch.no_grad():
            ...     outputs = model(**inputs)
            ...
            >>> answer_start_index = outputs.start_logits.argmax()
            >>> answer_end_index = outputs.end_logits.argmax()
            >>> predict_answer_token_ids = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
            >>> predict_answer_token = tokenizer.decode(predict_answer_token_ids)
            ```

            ```python
            >>> target_start_index, target_end_index = mindspore.tensor([130]), mindspore.tensor([132])
            >>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
            >>> loss = outputs.loss
            ```
        """
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        seqlen = input_ids.shape[1] if input_ids is not None else inputs_embeds.shape[1]

        if question_lengths is None and input_ids is not None:
            # assuming input_ids format: <cls> <question> <sep> context <sep>
            question_lengths = ops.argmax(input_ids.eq(self.sep_token_id).int(), dim=-1) + 1
            question_lengths = question_lengths.unsqueeze(1)

        logits_mask = None
        if question_lengths is not None:
            # setting lengths logits to `-inf`
            logits_mask = self.prepare_question_mask(question_lengths, seqlen)
            if token_type_ids is None:
                token_type_ids = ops.ones(*logits_mask.shape, dtype=mindspore.int32) - logits_mask
            logits_mask[:, 0] = False
            logits_mask = logits_mask.unsqueeze(2)

        outputs = self.bert(
            input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            position_ids=position_ids,
            head_mask=head_mask,
            inputs_embeds=inputs_embeds,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )

        sequence_output = outputs[0]
        logits = self.qa_classifier(sequence_output)

        if logits_mask is not None:
            # removing question tokens from the competition
            logits = logits - logits_mask * 1e6

        start_logits, end_logits = logits.split(1, axis=-1)
        start_logits = start_logits.squeeze(-1)
        end_logits = end_logits.squeeze(-1)

        total_loss = None
        if start_positions is not None and end_positions is not None:
            # If we are on multi-GPU, the positions may carry an extra dimension; squeeze it away
            if len(start_positions.shape) > 1:
                start_positions = start_positions.squeeze(-1)
            if len(end_positions.shape) > 1:
                end_positions = end_positions.squeeze(-1)
            # sometimes the start/end positions are outside our model inputs, we ignore these terms
            ignored_index = start_logits.shape[1]
            start_positions = start_positions.clamp(0, ignored_index)
            end_positions = end_positions.clamp(0, ignored_index)

            start_loss = F.cross_entropy(start_logits, start_positions, ignore_index=ignored_index)
            end_loss = F.cross_entropy(end_logits, end_positions, ignore_index=ignored_index)
            total_loss = (start_loss + end_loss) / 2

        if not return_dict:
            output = (start_logits, end_logits) + outputs[2:]
            return ((total_loss,) + output) if total_loss is not None else output

        return BigBirdForQuestionAnsweringModelOutput(
            loss=total_loss,
            start_logits=start_logits,
            end_logits=end_logits,
            pooler_output=outputs.pooler_output,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )

    @staticmethod
    def prepare_question_mask(q_lengths: mindspore.Tensor, maxlen: int):
        """
        Prepare a binary mask for the question tokens in the BigBirdForQuestionAnswering class.

        Args:
            q_lengths (mindspore.Tensor): A tensor containing the lengths of the question tokens.
                  Each element represents the length of a question in the batch.
                  Shape: (batch_size,)
            maxlen (int): The maximum length of the question tokens.
                  The mask will be padded with zeros up to this length.

        Returns:
            mindspore.Tensor: A binary mask of shape (batch_size, maxlen), where each element is 1 for
                a question token and 0 for a context token.

        Raises:
            None.

        Note:
            This method generates a binary mask marking the question tokens in each sequence of the batch,
            based on the per-sequence question lengths. The mask is later used to exclude the question
            tokens from the answer-span logits.
        """
        # q_lengths -> (bz, 1)
        mask = ops.arange(0, maxlen)
        mask = mask.unsqueeze(0)  # -> (1, maxlen)
        mask = ops.where(mask < q_lengths, mindspore.tensor(1), mindspore.tensor(0))
        return mask
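The masking logic in `prepare_question_mask` is framework-agnostic: positions before each question length get 1, the rest get 0. A pure-Python sketch of the same computation (illustrative only):

```python
def prepare_question_mask(q_lengths, maxlen):
    """Mark positions [0, q_len) with 1 (question tokens) and the rest with 0."""
    return [[1 if pos < q_len else 0 for pos in range(maxlen)]
            for q_len in q_lengths]

mask = prepare_question_mask([3, 5], maxlen=6)
# -> [[1, 1, 1, 0, 0, 0],
#     [1, 1, 1, 1, 1, 0]]
```

In `forward`, this mask does double duty: its complement seeds `token_type_ids` when none are given, and (after clearing the `[CLS]` position) it pushes question-token logits to a large negative value so they cannot be predicted as answer spans.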

mindnlp.transformers.models.big_bird.modeling_big_bird.BigBirdForQuestionAnswering.__init__(config, add_pooling_layer=False)

init

Initializes an instance of the BigBirdForQuestionAnswering class.

PARAMETER DESCRIPTION
self

The instance of the class.

TYPE: object

config

The configuration object containing the model configuration.

TYPE: object

add_pooling_layer

A boolean indicating whether to add a pooling layer. Defaults to False.

TYPE: bool DEFAULT: False

RETURNS DESCRIPTION

None.

Source code in mindnlp/transformers/models/big_bird/modeling_big_bird.py
def __init__(self, config, add_pooling_layer=False):
    '''
    __init__

    Initializes an instance of the BigBirdForQuestionAnswering class.

    Args:
        self (object): The instance of the class.
        config (object): The configuration object containing the model configuration.
        add_pooling_layer (bool, optional): A boolean indicating whether to add a pooling layer. Defaults to False.

    Returns:
        None.

    Raises:
        None
    '''
    super().__init__(config)

    config.num_labels = 2
    self.num_labels = config.num_labels
    self.sep_token_id = config.sep_token_id

    self.bert = BigBirdModel(config, add_pooling_layer=add_pooling_layer)
    self.qa_classifier = BigBirdForQuestionAnsweringHead(config)

    # Initialize weights and apply final processing
    self.post_init()

mindnlp.transformers.models.big_bird.modeling_big_bird.BigBirdForQuestionAnswering.forward(input_ids=None, attention_mask=None, question_lengths=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, start_positions=None, end_positions=None, output_attentions=None, output_hidden_states=None, return_dict=None)

PARAMETER DESCRIPTION
start_positions

Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss.

TYPE: `mindspore.Tensor` of shape `(batch_size,)`, *optional* DEFAULT: None

end_positions

Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss.

TYPE: `mindspore.Tensor` of shape `(batch_size,)`, *optional* DEFAULT: None

RETURNS DESCRIPTION
Union[BigBirdForQuestionAnsweringModelOutput, Tuple[Tensor]]

Union[BigBirdForQuestionAnsweringModelOutput, Tuple[mindspore.Tensor]]

Example
>>> import torch
>>> import mindspore
>>> from transformers import AutoTokenizer, BigBirdForQuestionAnswering
>>> from datasets import load_dataset
...
>>> tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")
>>> model = BigBirdForQuestionAnswering.from_pretrained("google/bigbird-roberta-base")
>>> squad_ds = load_dataset("squad_v2", split="train")  # doctest: +IGNORE_RESULT
...
>>> # select random article and question
>>> LONG_ARTICLE = squad_ds[81514]["context"]
>>> QUESTION = squad_ds[81514]["question"]
>>> QUESTION
'During daytime how high can the temperatures reach?'
>>> inputs = tokenizer(QUESTION, LONG_ARTICLE, return_tensors="pt")
>>> # long article and question input
>>> list(inputs["input_ids"].shape)
[1, 929]
>>> with torch.no_grad():
...     outputs = model(**inputs)
...
>>> answer_start_index = outputs.start_logits.argmax()
>>> answer_end_index = outputs.end_logits.argmax()
>>> predict_answer_token_ids = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
>>> predict_answer_token = tokenizer.decode(predict_answer_token_ids)
>>> target_start_index, target_end_index = mindspore.tensor([130]), mindspore.tensor([132])
>>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
>>> loss = outputs.loss
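As documented above, out-of-range start/end positions are clamped to the sequence length, which is then used as the cross-entropy `ignore_index`. A pure-Python sketch of that behaviour with toy logits (`span_loss` is a hypothetical name, not the MindSpore implementation):

```python
import math

def span_loss(start_logits, end_logits, start_positions, end_positions):
    """Average of start/end cross-entropies; out-of-range positions are
    clamped to seq_len, which doubles as the ignore_index."""
    seq_len = len(start_logits[0])

    def cross_entropy(logits, targets, ignore_index):
        losses = []
        for row, target in zip(logits, targets):
            if target == ignore_index:
                continue  # clamped-out position contributes nothing
            m = max(row)
            log_z = m + math.log(sum(math.exp(x - m) for x in row))
            losses.append(log_z - row[target])
        return sum(losses) / len(losses) if losses else 0.0

    start_positions = [min(max(p, 0), seq_len) for p in start_positions]
    end_positions = [min(max(p, 0), seq_len) for p in end_positions]
    start_loss = cross_entropy(start_logits, start_positions, ignore_index=seq_len)
    end_loss = cross_entropy(end_logits, end_positions, ignore_index=seq_len)
    return (start_loss + end_loss) / 2

# A 3-token sequence; end position 99 is clamped to 3 and therefore ignored.
loss = span_loss([[0.0, 0.0, 0.0]], [[0.0, 0.0, 0.0]], [1], [99])
```

Averaging the two terms keeps the loss on the same scale whether one or both position labels are usable.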
Source code in mindnlp/transformers/models/big_bird/modeling_big_bird.py
def forward(
    self,
    input_ids: Optional[mindspore.Tensor] = None,
    attention_mask: Optional[mindspore.Tensor] = None,
    question_lengths: Optional[mindspore.Tensor] = None,
    token_type_ids: Optional[mindspore.Tensor] = None,
    position_ids: Optional[mindspore.Tensor] = None,
    head_mask: Optional[mindspore.Tensor] = None,
    inputs_embeds: Optional[mindspore.Tensor] = None,
    start_positions: Optional[mindspore.Tensor] = None,
    end_positions: Optional[mindspore.Tensor] = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
) -> Union[BigBirdForQuestionAnsweringModelOutput, Tuple[mindspore.Tensor]]:
    r"""
    Args:
        start_positions (`mindspore.Tensor` of shape `(batch_size,)`, *optional*):
            Labels for position (index) of the start of the labelled span for computing the token classification loss.
            Positions are clamped to the length of the sequence (`sequence_length`). Positions outside of the sequence
            are not taken into account for computing the loss.
        end_positions (`mindspore.Tensor` of shape `(batch_size,)`, *optional*):
            Labels for position (index) of the end of the labelled span for computing the token classification loss.
            Positions are clamped to the length of the sequence (`sequence_length`). Positions outside of the sequence
            are not taken into account for computing the loss.

    Returns:
        Union[BigBirdForQuestionAnsweringModelOutput, Tuple[mindspore.Tensor]]

    Example:
        ```python
        >>> import mindspore
        >>> from mindnlp.transformers import AutoTokenizer, BigBirdForQuestionAnswering
        >>> from datasets import load_dataset
        ...
        >>> tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")
        >>> model = BigBirdForQuestionAnswering.from_pretrained("google/bigbird-roberta-base")
        >>> squad_ds = load_dataset("squad_v2", split="train")  # doctest: +IGNORE_RESULT
        ...
        >>> # select random article and question
        >>> LONG_ARTICLE = squad_ds[81514]["context"]
        >>> QUESTION = squad_ds[81514]["question"]
        >>> QUESTION
        'During daytime how high can the temperatures reach?'
        >>> inputs = tokenizer(QUESTION, LONG_ARTICLE, return_tensors="ms")
        >>> # long article and question input
        >>> list(inputs["input_ids"].shape)
        [1, 929]
        >>> outputs = model(**inputs)
        >>> answer_start_index = outputs.start_logits.argmax()
        >>> answer_end_index = outputs.end_logits.argmax()
        >>> predict_answer_token_ids = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
        >>> predict_answer_token = tokenizer.decode(predict_answer_token_ids)
        ```

        ```python
        >>> target_start_index, target_end_index = mindspore.tensor([130]), mindspore.tensor([132])
        >>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
        >>> loss = outputs.loss
        ```
    """
    return_dict = return_dict if return_dict is not None else self.config.use_return_dict

    seqlen = input_ids.shape[1] if input_ids is not None else inputs_embeds.shape[1]

    if question_lengths is None and input_ids is not None:
        # assuming input_ids format: <cls> <question> <sep> context <sep>
        question_lengths = ops.argmax(input_ids.eq(self.sep_token_id).int(), dim=-1) + 1
        question_lengths = question_lengths.unsqueeze(1)

    logits_mask = None
    if question_lengths is not None:
        # setting lengths logits to `-inf`
        logits_mask = self.prepare_question_mask(question_lengths, seqlen)
        if token_type_ids is None:
            token_type_ids = ops.ones(*logits_mask.shape, dtype=mindspore.int32) - logits_mask
        logits_mask[:, 0] = False
        logits_mask = logits_mask.unsqueeze(2)

    outputs = self.bert(
        input_ids,
        attention_mask=attention_mask,
        token_type_ids=token_type_ids,
        position_ids=position_ids,
        head_mask=head_mask,
        inputs_embeds=inputs_embeds,
        output_attentions=output_attentions,
        output_hidden_states=output_hidden_states,
        return_dict=return_dict,
    )

    sequence_output = outputs[0]
    logits = self.qa_classifier(sequence_output)

    if logits_mask is not None:
        # removing question tokens from the competition
        logits = logits - logits_mask * 1e6

    start_logits, end_logits = logits.split(1, axis=-1)
    start_logits = start_logits.squeeze(-1)
    end_logits = end_logits.squeeze(-1)

    total_loss = None
    if start_positions is not None and end_positions is not None:
        # If we are on multi-GPU, split add a dimension
        if len(start_positions.shape) > 1:
            start_positions = start_positions.squeeze(-1)
        if len(end_positions.shape) > 1:
            end_positions = end_positions.squeeze(-1)
        # sometimes the start/end positions are outside our model inputs, we ignore these terms
        ignored_index = start_logits.shape[1]
        start_positions = start_positions.clamp(0, ignored_index)
        end_positions = end_positions.clamp(0, ignored_index)

        start_loss = F.cross_entropy(start_logits, start_positions, ignore_index=ignored_index)
        end_loss = F.cross_entropy(end_logits, end_positions, ignore_index=ignored_index)
        total_loss = (start_loss + end_loss) / 2

    if not return_dict:
        output = (start_logits, end_logits) + outputs[2:]
        return ((total_loss,) + output) if total_loss is not None else output

    return BigBirdForQuestionAnsweringModelOutput(
        loss=total_loss,
        start_logits=start_logits,
        end_logits=end_logits,
        pooler_output=outputs.pooler_output,
        hidden_states=outputs.hidden_states,
        attentions=outputs.attentions,
    )
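The span-loss bookkeeping in `forward` above can be sketched in plain Python. This is an illustrative re-implementation, not the model code: out-of-range start/end positions are clamped into `[0, ignored_index]`, where `ignored_index` equals the sequence length, and cross-entropy is then told to ignore exactly that index.

```python
# Illustrative sketch (plain Python, not MindSpore) of the position clamping
# done before the QA span loss: positions past the end of the sequence land
# on ignored_index and therefore contribute nothing to the loss.
def clamp_position(position, seq_len):
    ignored_index = seq_len          # one past the last valid position
    return max(0, min(position, ignored_index))

in_range = clamp_position(4, 10)     # stays 4
too_far = clamp_position(25, 10)     # clamped to ignored_index (10)
negative = clamp_position(-3, 10)    # clamped to 0
```

The total loss is then the mean of the start and end cross-entropies, as in the source above.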

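When `question_lengths` is not supplied, `forward` derives it from the position of the first separator token (`argmax` over the `eq(sep_token_id)` mask returns the first match). A plain-Python sketch, using hypothetical token ids:

```python
# Sketch of question-length inference for inputs of the form
# <cls> <question tokens> <sep> <context tokens> <sep>.
# Token ids below are hypothetical; 2 stands in for sep_token_id.
def infer_question_length(input_ids, sep_token_id):
    # index of the FIRST separator, plus one, so the question span
    # includes its trailing <sep> (mirrors argmax(eq(sep).int()) + 1)
    return input_ids.index(sep_token_id) + 1

ids = [0, 11, 12, 13, 2, 21, 22, 2]
question_length = infer_question_length(ids, sep_token_id=2)  # 5
```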
mindnlp.transformers.models.big_bird.modeling_big_bird.BigBirdForQuestionAnswering.prepare_question_mask(q_lengths, maxlen) staticmethod

Prepare a binary mask for the question tokens in the BigBirdForQuestionAnswering class.

PARAMETER DESCRIPTION
q_lengths

A tensor containing the lengths of the question tokens. Each element represents the length of a question in the batch. Shape: (batch_size, 1)

TYPE: Tensor

maxlen

The maximum length of the question tokens. The mask will be padded with zeros up to this length.

TYPE: int

RETURNS DESCRIPTION

Tensor

A binary mask of shape (batch_size, maxlen): 1 for question-token positions, 0 elsewhere.

RAISES DESCRIPTION
TypeError

If q_lengths is not of type mindspore.Tensor.

ValueError

If q_lengths or maxlen is not a positive integer.

Note

This method generates a binary mask for the question tokens in a batch, indicating which tokens are valid based on their lengths. The mask will be of shape (batch_size, maxlen), where each element is either 1 (valid token) or 0 (padding token). The mask is used in subsequent operations to ignore the padding tokens during computation.
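The mask construction can be reproduced without MindSpore: a position range `0..maxlen-1` is compared elementwise against each (batch, 1)-shaped length, broadcasting to a (batch_size, maxlen) grid of 0/1 values. A minimal sketch:

```python
# Plain-Python equivalent of prepare_question_mask, assuming q_lengths
# arrives with shape (batch_size, 1) as provided by forward().
def question_mask(q_lengths, maxlen):
    # 1 where the position belongs to the question, 0 elsewhere
    return [
        [1 if pos < length else 0 for pos in range(maxlen)]
        for (length,) in q_lengths
    ]

mask = question_mask([[3], [5]], maxlen=6)
# row 0 marks the first 3 positions, row 1 the first 5
```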

Source code in mindnlp/transformers/models/big_bird/modeling_big_bird.py
@staticmethod
def prepare_question_mask(q_lengths: mindspore.Tensor, maxlen: int):
    """
    Prepare a binary mask for the question tokens in the BigBirdForQuestionAnswering class.

    Args:
        q_lengths (mindspore.Tensor): A tensor containing the lengths of the question tokens.
              Each element represents the length of a question in the batch.
              Shape: (batch_size, 1)
        maxlen (int): The maximum length of the question tokens.
              The mask will be padded with zeros up to this length.

    Returns:
        mindspore.Tensor: A binary mask of shape (batch_size, maxlen); 1 marks question-token positions, 0 elsewhere.

    Raises:
        TypeError: If q_lengths is not of type mindspore.Tensor.
        ValueError: If q_lengths or maxlen is not a positive integer.

    Note:
        This method generates a binary mask for the question tokens in a batch, indicating which tokens are valid based on their lengths.
        The mask will be of shape (batch_size, maxlen), where each element is either 1 (valid token) or 0 (padding token).
        The mask is used in subsequent operations to ignore the padding tokens during computation.
    """
    # q_lengths -> (bz, 1)
    mask = ops.arange(0, maxlen)
    mask = mask.unsqueeze(0)  # -> (1, maxlen)
    mask = ops.where(mask < q_lengths, mindspore.tensor(1), mindspore.tensor(0))
    return mask

mindnlp.transformers.models.big_bird.modeling_big_bird.BigBirdForSequenceClassification

Bases: BigBirdPreTrainedModel

BigBirdForSequenceClassification is a class that represents a BigBird model for sequence classification tasks. It extends the functionality of BigBirdPreTrainedModel to include sequence classification capabilities.

This class includes an initialization method '__init__' that initializes the model with the provided configuration. It also includes a 'forward' method that runs the forward pass for inference or training, taking input tensors and optional arguments. The method computes the loss based on the provided labels and returns the sequence classifier output.

The 'forward' method accepts various input tensors such as input_ids, attention_mask, token_type_ids, etc., and computes the sequence classification/regression loss based on the provided labels. The method supports different types of loss calculations depending on the configuration and the number of labels.

The class provides an example usage demonstrating how to load the model, tokenize input text, and perform sequence classification using the model. It also showcases how to compute the loss for a given input and labels.

For detailed usage and examples, refer to the code snippets provided in the docstring.
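The loss selection in `forward` depends on `config.problem_type`, which is auto-detected from `num_labels` and the label dtype on the first call. The decision logic can be sketched as follows, with dtype names standing in for the MindSpore dtypes checked in the real code:

```python
# Sketch of the problem_type auto-detection performed by
# BigBirdForSequenceClassification.forward() when config.problem_type
# has not been set explicitly.
def infer_problem_type(num_labels, labels_dtype):
    if num_labels == 1:
        return "regression"                   # MSE loss
    if labels_dtype in ("int32", "int64"):
        return "single_label_classification"  # cross-entropy loss
    return "multi_label_classification"       # BCE-with-logits loss

single = infer_problem_type(3, "int64")       # "single_label_classification"
multi = infer_problem_type(3, "float32")      # "multi_label_classification"
```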

Source code in mindnlp/transformers/models/big_bird/modeling_big_bird.py
class BigBirdForSequenceClassification(BigBirdPreTrainedModel):

    """
    BigBirdForSequenceClassification is a class that represents a BigBird model for sequence classification tasks.
    It extends the functionality of BigBirdPreTrainedModel to include sequence classification capabilities.

    This class includes an initialization method '__init__' that initializes the model with the provided configuration.
    It also includes a 'forward' method that runs the forward pass for inference or training, taking input tensors and optional arguments.
    The method computes the loss based on the provided labels and returns the sequence classifier output.

    The 'forward' method accepts various input tensors such as input_ids, attention_mask, token_type_ids, etc.,
    and computes the sequence classification/regression loss based on the provided labels.
    The method supports different types of loss calculations depending on the configuration and the number of labels.

    The class provides an example usage demonstrating how to load the model, tokenize input text, and perform sequence classification using the model.
    It also showcases how to compute the loss for a given input and labels.

    For detailed usage and examples, refer to the code snippets provided in the docstring.
    """
    def __init__(self, config):
        """
        Initializes a new instance of the BigBirdForSequenceClassification class.

        Args:
            self (BigBirdForSequenceClassification): The instance of the class itself.
            config: The configuration object containing the model configuration.

        Returns:
            None.

        Raises:
            None
        """
        super().__init__(config)
        self.num_labels = config.num_labels
        self.config = config
        self.bert = BigBirdModel(config)
        self.classifier = BigBirdClassificationHead(config)

        # Initialize weights and apply final processing
        self.post_init()

    def forward(
        self,
        input_ids: mindspore.Tensor = None,
        attention_mask: Optional[mindspore.Tensor] = None,
        token_type_ids: Optional[mindspore.Tensor] = None,
        position_ids: Optional[mindspore.Tensor] = None,
        head_mask: Optional[mindspore.Tensor] = None,
        inputs_embeds: Optional[mindspore.Tensor] = None,
        labels: Optional[mindspore.Tensor] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[SequenceClassifierOutput, Tuple[mindspore.Tensor]]:
        r"""
        Args:
            labels (`mindspore.Tensor` of shape `(batch_size,)`, *optional*):
                Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
                config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
                `config.num_labels > 1` a classification loss is computed (Cross-Entropy).

        Returns:
            Union[SequenceClassifierOutput, Tuple[mindspore.Tensor]]

        Example:
            ```python
            >>> import mindspore
            >>> from mindnlp.transformers import AutoTokenizer, BigBirdForSequenceClassification
            >>> from datasets import load_dataset
            ...
            >>> tokenizer = AutoTokenizer.from_pretrained("l-yohai/bigbird-roberta-base-mnli")
            >>> model = BigBirdForSequenceClassification.from_pretrained("l-yohai/bigbird-roberta-base-mnli")
            >>> squad_ds = load_dataset("squad_v2", split="train")  # doctest: +IGNORE_RESULT
            ...
            >>> LONG_ARTICLE = squad_ds[81514]["context"]
            >>> inputs = tokenizer(LONG_ARTICLE, return_tensors="ms")
            >>> # long input article
            >>> list(inputs["input_ids"].shape)
            [1, 919]
            >>> logits = model(**inputs).logits
            >>> predicted_class_id = logits.argmax().item()
            >>> model.config.id2label[predicted_class_id]
            'LABEL_0'
            ```

            ```python
            >>> num_labels = len(model.config.id2label)
            >>> model = BigBirdForSequenceClassification.from_pretrained(
            ...     "l-yohai/bigbird-roberta-base-mnli", num_labels=num_labels
            ... )
            >>> labels = mindspore.tensor(1)
            >>> loss = model(**inputs, labels=labels).loss
            >>> round(loss.item(), 2)
            1.13
            ```
        """
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        outputs = self.bert(
            input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            position_ids=position_ids,
            head_mask=head_mask,
            inputs_embeds=inputs_embeds,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )

        sequence_output = outputs[0]
        logits = self.classifier(sequence_output)

        loss = None
        if labels is not None:
            if self.config.problem_type is None:
                if self.num_labels == 1:
                    self.config.problem_type = "regression"
                elif self.num_labels > 1 and labels.dtype in (mindspore.int32, mindspore.int64):
                    self.config.problem_type = "single_label_classification"
                else:
                    self.config.problem_type = "multi_label_classification"

            if self.config.problem_type == "regression":
                if self.num_labels == 1:
                    loss = ops.mse_loss(logits.squeeze(), labels.squeeze())
                else:
                    loss = ops.mse_loss(logits, labels)
            elif self.config.problem_type == "single_label_classification":
                loss = F.cross_entropy(logits.view(-1, self.num_labels), labels.view(-1))
            elif self.config.problem_type == "multi_label_classification":
                loss = ops.binary_cross_entropy_with_logits(logits, labels)

        if not return_dict:
            output = (logits,) + outputs[2:]
            return ((loss,) + output) if loss is not None else output

        return SequenceClassifierOutput(
            loss=loss,
            logits=logits,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )

mindnlp.transformers.models.big_bird.modeling_big_bird.BigBirdForSequenceClassification.__init__(config)

Initializes a new instance of the BigBirdForSequenceClassification class.

PARAMETER DESCRIPTION
self

The instance of the class itself.

TYPE: BigBirdForSequenceClassification

config

The configuration object containing the model configuration.

RETURNS DESCRIPTION

None.

Source code in mindnlp/transformers/models/big_bird/modeling_big_bird.py
def __init__(self, config):
    """
    Initializes a new instance of the BigBirdForSequenceClassification class.

    Args:
        self (BigBirdForSequenceClassification): The instance of the class itself.
        config: The configuration object containing the model configuration.

    Returns:
        None.

    Raises:
        None
    """
    super().__init__(config)
    self.num_labels = config.num_labels
    self.config = config
    self.bert = BigBirdModel(config)
    self.classifier = BigBirdClassificationHead(config)

    # Initialize weights and apply final processing
    self.post_init()

mindnlp.transformers.models.big_bird.modeling_big_bird.BigBirdForSequenceClassification.forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, labels=None, output_attentions=None, output_hidden_states=None, return_dict=None)

PARAMETER DESCRIPTION
labels

Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy).

TYPE: `mindspore.Tensor` of shape `(batch_size,)`, *optional* DEFAULT: None

RETURNS DESCRIPTION
Union[SequenceClassifierOutput, Tuple[Tensor]]

Union[SequenceClassifierOutput, Tuple[mindspore.Tensor]]

Example
>>> import mindspore
>>> from mindnlp.transformers import AutoTokenizer, BigBirdForSequenceClassification
>>> from datasets import load_dataset
...
>>> tokenizer = AutoTokenizer.from_pretrained("l-yohai/bigbird-roberta-base-mnli")
>>> model = BigBirdForSequenceClassification.from_pretrained("l-yohai/bigbird-roberta-base-mnli")
>>> squad_ds = load_dataset("squad_v2", split="train")  # doctest: +IGNORE_RESULT
...
>>> LONG_ARTICLE = squad_ds[81514]["context"]
>>> inputs = tokenizer(LONG_ARTICLE, return_tensors="ms")
>>> # long input article
>>> list(inputs["input_ids"].shape)
[1, 919]
>>> logits = model(**inputs).logits
>>> predicted_class_id = logits.argmax().item()
>>> model.config.id2label[predicted_class_id]
'LABEL_0'
>>> num_labels = len(model.config.id2label)
>>> model = BigBirdForSequenceClassification.from_pretrained(
...     "l-yohai/bigbird-roberta-base-mnli", num_labels=num_labels
... )
>>> labels = mindspore.tensor(1)
>>> loss = model(**inputs, labels=labels).loss
>>> round(loss.item(), 2)
1.13
Source code in mindnlp/transformers/models/big_bird/modeling_big_bird.py
def forward(
    self,
    input_ids: mindspore.Tensor = None,
    attention_mask: Optional[mindspore.Tensor] = None,
    token_type_ids: Optional[mindspore.Tensor] = None,
    position_ids: Optional[mindspore.Tensor] = None,
    head_mask: Optional[mindspore.Tensor] = None,
    inputs_embeds: Optional[mindspore.Tensor] = None,
    labels: Optional[mindspore.Tensor] = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
) -> Union[SequenceClassifierOutput, Tuple[mindspore.Tensor]]:
    r"""
    Args:
        labels (`mindspore.Tensor` of shape `(batch_size,)`, *optional*):
            Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
            config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
            `config.num_labels > 1` a classification loss is computed (Cross-Entropy).

    Returns:
        Union[SequenceClassifierOutput, Tuple[mindspore.Tensor]]

    Example:
        ```python
        >>> import mindspore
        >>> from mindnlp.transformers import AutoTokenizer, BigBirdForSequenceClassification
        >>> from datasets import load_dataset
        ...
        >>> tokenizer = AutoTokenizer.from_pretrained("l-yohai/bigbird-roberta-base-mnli")
        >>> model = BigBirdForSequenceClassification.from_pretrained("l-yohai/bigbird-roberta-base-mnli")
        >>> squad_ds = load_dataset("squad_v2", split="train")  # doctest: +IGNORE_RESULT
        ...
        >>> LONG_ARTICLE = squad_ds[81514]["context"]
        >>> inputs = tokenizer(LONG_ARTICLE, return_tensors="ms")
        >>> # long input article
        >>> list(inputs["input_ids"].shape)
        [1, 919]
        >>> logits = model(**inputs).logits
        >>> predicted_class_id = logits.argmax().item()
        >>> model.config.id2label[predicted_class_id]
        'LABEL_0'
        ```

        ```python
        >>> num_labels = len(model.config.id2label)
        >>> model = BigBirdForSequenceClassification.from_pretrained(
        ...     "l-yohai/bigbird-roberta-base-mnli", num_labels=num_labels
        ... )
        >>> labels = mindspore.tensor(1)
        >>> loss = model(**inputs, labels=labels).loss
        >>> round(loss.item(), 2)
        1.13
        ```
    """
    return_dict = return_dict if return_dict is not None else self.config.use_return_dict

    outputs = self.bert(
        input_ids,
        attention_mask=attention_mask,
        token_type_ids=token_type_ids,
        position_ids=position_ids,
        head_mask=head_mask,
        inputs_embeds=inputs_embeds,
        output_attentions=output_attentions,
        output_hidden_states=output_hidden_states,
        return_dict=return_dict,
    )

    sequence_output = outputs[0]
    logits = self.classifier(sequence_output)

    loss = None
    if labels is not None:
        if self.config.problem_type is None:
            if self.num_labels == 1:
                self.config.problem_type = "regression"
            elif self.num_labels > 1 and labels.dtype in (mindspore.int32, mindspore.int64):
                self.config.problem_type = "single_label_classification"
            else:
                self.config.problem_type = "multi_label_classification"

        if self.config.problem_type == "regression":
            if self.num_labels == 1:
                loss = ops.mse_loss(logits.squeeze(), labels.squeeze())
            else:
                loss = ops.mse_loss(logits, labels)
        elif self.config.problem_type == "single_label_classification":
            loss = F.cross_entropy(logits.view(-1, self.num_labels), labels.view(-1))
        elif self.config.problem_type == "multi_label_classification":
            loss = ops.binary_cross_entropy_with_logits(logits, labels)

    if not return_dict:
        output = (logits,) + outputs[2:]
        return ((loss,) + output) if loss is not None else output

    return SequenceClassifierOutput(
        loss=loss,
        logits=logits,
        hidden_states=outputs.hidden_states,
        attentions=outputs.attentions,
    )

mindnlp.transformers.models.big_bird.modeling_big_bird.BigBirdForTokenClassification

Bases: BigBirdPreTrainedModel

BigBirdForTokenClassification is a token classification model based on the BigBird architecture. It inherits from BigBirdPreTrainedModel and is designed for token-level classification tasks, such as Named Entity Recognition or Part-of-Speech tagging.

The class's constructor initializes the model with the provided configuration and sets up the necessary components, including the BigBirdModel, dropout layers, and classifier. It also calls the post_init method for additional setup.

The forward method takes input tensors and optional arguments for various model outputs, such as attentions and hidden states. It returns the token classification output, including logits for each token, and computes the token classification loss if labels are provided.

The labels parameter in the forward method is an optional tensor containing the target labels for token classification. The indices in the labels tensor should be in the range [0, num_labels - 1].

If return_dict is set to False, the method returns a tuple containing the token logits and additional model outputs. If return_dict is True, the method returns a TokenClassifierOutput object containing the loss, logits, hidden states, and attentions.

Note

This docstring is generated based on the provided code snippet and may need to be updated with additional details about the class and its methods.
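The token-level cross-entropy in this model flattens both tensors first: logits of shape (batch, seq_len, num_labels) become (batch * seq_len, num_labels), and labels become a flat vector. The reshaping done by `logits.view(-1, self.num_labels)` and `labels.view(-1)` can be sketched as a shape calculation (illustrative helper, not part of the model):

```python
# Sketch of the flattening done before F.cross_entropy in
# BigBirdForTokenClassification.forward(); shapes only, no real tensors.
def flattened_shapes(batch, seq_len, num_labels):
    logits_2d = (batch * seq_len, num_labels)  # logits.view(-1, num_labels)
    labels_1d = (batch * seq_len,)             # labels.view(-1)
    return logits_2d, labels_1d

shapes = flattened_shapes(batch=2, seq_len=8, num_labels=5)  # ((16, 5), (16,))
```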

Source code in mindnlp/transformers/models/big_bird/modeling_big_bird.py
class BigBirdForTokenClassification(BigBirdPreTrainedModel):

    """
    BigBirdForTokenClassification is a token classification model based on the BigBird architecture.
    It inherits from BigBirdPreTrainedModel and is designed for token-level classification tasks, such as Named
    Entity Recognition or Part-of-Speech tagging.

    The class's constructor initializes the model with the provided configuration and sets up the necessary components,
    including the BigBirdModel, dropout layers, and classifier. It also calls the post_init method for additional setup.

    The forward method takes input tensors and optional arguments for various model outputs, such as attentions and hidden states.
    It returns the token classification output, including logits for each token, and computes the token classification loss if labels are provided.

    The labels parameter in the forward method is an optional tensor containing the target labels for token classification.
    The indices in the labels tensor should be in the range [0, num_labels - 1].

    If return_dict is set to False, the method returns a tuple containing the token logits and additional model outputs.
    If return_dict is True, the method returns a TokenClassifierOutput object containing the loss, logits, hidden states,
    and attentions.

    Note:
        This docstring is generated based on the provided code snippet and may need to be updated with additional
        details about the class and its methods.
    """
    def __init__(self, config):
        """
        Initializes a new instance of the BigBirdForTokenClassification class.

        Args:
            self: The object itself.
            config: An instance of the BigBirdConfig class that contains the configuration parameters for the model.
                It should have the following attributes:

                - num_labels (int): The number of labels for token classification.

        Returns:
            None

        Raises:
            None
        """
        super().__init__(config)
        self.num_labels = config.num_labels

        self.bert = BigBirdModel(config)
        classifier_dropout = (
            config.classifier_dropout if config.classifier_dropout is not None else config.hidden_dropout_prob
        )
        self.dropout = nn.Dropout(p=classifier_dropout)
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)

        # Initialize weights and apply final processing
        self.post_init()

    def forward(
        self,
        input_ids: mindspore.Tensor = None,
        attention_mask: Optional[mindspore.Tensor] = None,
        token_type_ids: Optional[mindspore.Tensor] = None,
        position_ids: Optional[mindspore.Tensor] = None,
        head_mask: Optional[mindspore.Tensor] = None,
        inputs_embeds: Optional[mindspore.Tensor] = None,
        labels: Optional[mindspore.Tensor] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[TokenClassifierOutput, Tuple[mindspore.Tensor]]:
        r"""
        Args:
            labels (`mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
                Labels for computing the token classification loss. Indices should be in `[0, ..., config.num_labels - 1]`.
        """
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        outputs = self.bert(
            input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            position_ids=position_ids,
            head_mask=head_mask,
            inputs_embeds=inputs_embeds,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )

        sequence_output = outputs[0]

        sequence_output = self.dropout(sequence_output)
        logits = self.classifier(sequence_output)

        loss = None
        if labels is not None:
            loss = F.cross_entropy(logits.view(-1, self.num_labels), labels.view(-1))

        if not return_dict:
            output = (logits,) + outputs[2:]
            return ((loss,) + output) if loss is not None else output

        return TokenClassifierOutput(
            loss=loss,
            logits=logits,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )

mindnlp.transformers.models.big_bird.modeling_big_bird.BigBirdForTokenClassification.__init__(config)

Initializes a new instance of the BigBirdForTokenClassification class.

PARAMETER DESCRIPTION
self

The object itself.

config

An instance of the BigBirdConfig class that contains the configuration parameters for the model. It should have the following attributes:

  • num_labels (int): The number of labels for token classification.

RETURNS DESCRIPTION

None

Source code in mindnlp/transformers/models/big_bird/modeling_big_bird.py
def __init__(self, config):
    """
    Initializes a new instance of the BigBirdForTokenClassification class.

    Args:
        self: The object itself.
        config: An instance of the BigBirdConfig class that contains the configuration parameters for the model.
            It should have the following attributes:

            - num_labels (int): The number of labels for token classification.

    Returns:
        None

    Raises:
        None
    """
    super().__init__(config)
    self.num_labels = config.num_labels

    self.bert = BigBirdModel(config)
    classifier_dropout = (
        config.classifier_dropout if config.classifier_dropout is not None else config.hidden_dropout_prob
    )
    self.dropout = nn.Dropout(p=classifier_dropout)
    self.classifier = nn.Linear(config.hidden_size, config.num_labels)

    # Initialize weights and apply final processing
    self.post_init()

mindnlp.transformers.models.big_bird.modeling_big_bird.BigBirdForTokenClassification.forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, labels=None, output_attentions=None, output_hidden_states=None, return_dict=None)

PARAMETER DESCRIPTION
labels

Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].

TYPE: `mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional* DEFAULT: None

Source code in mindnlp/transformers/models/big_bird/modeling_big_bird.py
def forward(
    self,
    input_ids: mindspore.Tensor = None,
    attention_mask: Optional[mindspore.Tensor] = None,
    token_type_ids: Optional[mindspore.Tensor] = None,
    position_ids: Optional[mindspore.Tensor] = None,
    head_mask: Optional[mindspore.Tensor] = None,
    inputs_embeds: Optional[mindspore.Tensor] = None,
    labels: Optional[mindspore.Tensor] = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
) -> Union[TokenClassifierOutput, Tuple[mindspore.Tensor]]:
    r"""
    Args:
        labels (`mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
            Labels for computing the token classification loss. Indices should be in `[0, ..., config.num_labels - 1]`.
    """
    return_dict = return_dict if return_dict is not None else self.config.use_return_dict

    outputs = self.bert(
        input_ids,
        attention_mask=attention_mask,
        token_type_ids=token_type_ids,
        position_ids=position_ids,
        head_mask=head_mask,
        inputs_embeds=inputs_embeds,
        output_attentions=output_attentions,
        output_hidden_states=output_hidden_states,
        return_dict=return_dict,
    )

    sequence_output = outputs[0]

    sequence_output = self.dropout(sequence_output)
    logits = self.classifier(sequence_output)

    loss = None
    if labels is not None:
        loss = F.cross_entropy(logits.view(-1, self.num_labels), labels.view(-1))

    if not return_dict:
        output = (logits,) + outputs[2:]
        return ((loss,) + output) if loss is not None else output

    return TokenClassifierOutput(
        loss=loss,
        logits=logits,
        hidden_states=outputs.hidden_states,
        attentions=outputs.attentions,
    )
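The `view(-1, ...)` calls in the loss computation above flatten the batch and sequence dimensions before cross-entropy. The reshaping can be illustrated with plain Python lists (a sketch of the shape logic, not the tensor implementation):

```python
def flatten_for_token_loss(logits, labels):
    """Flatten (batch, seq_len, num_labels) logits and (batch, seq_len)
    labels into the 2-D / 1-D shapes that cross-entropy expects,
    mirroring logits.view(-1, num_labels) and labels.view(-1)."""
    flat_logits = [scores for sequence in logits for scores in sequence]
    flat_labels = [label for sequence in labels for label in sequence]
    return flat_logits, flat_labels
```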

mindnlp.transformers.models.big_bird.modeling_big_bird.BigBirdLayer

Bases: Module

This class represents a layer of the BigBird model, which is used for attention-based computations. It inherits from the nn.Module class.

ATTRIBUTE DESCRIPTION
config

An object that stores the configuration settings for the BigBirdLayer.

attention_type

A string representing the type of attention used by the layer.

chunk_size_feed_forward

An integer specifying the chunk size for feed-forward computations.

seq_len_dim

An integer indicating the dimension along which the sequence length is defined.

attention

An instance of the BigBirdAttention class, responsible for performing attention computations.

is_decoder

A boolean value indicating whether the BigBirdLayer is used as a decoder model.

add_cross_attention

A boolean value specifying whether cross attention is added to the model.

crossattention

An instance of the BigBirdAttention class used for cross-attention computations.

intermediate

An instance of the BigBirdIntermediate class, responsible for intermediate computations.

output

An instance of the BigBirdOutput class, used for the final output computations.

METHOD DESCRIPTION
set_attention_type

Sets the attention type to either 'original_full' or 'block_sparse'.

forward

Constructs the layer by performing attention-based computations and returning the outputs.

feed_forward_chunk

Applies the feed-forward computation on the attention output.

Source code in mindnlp/transformers/models/big_bird/modeling_big_bird.py
class BigBirdLayer(nn.Module):

    """
    This class represents a layer of the BigBird model, which is used for attention-based computations. It inherits from the nn.Module class.

    Attributes:
        config: An object that stores the configuration settings for the BigBirdLayer.
        attention_type: A string representing the type of attention used by the layer.
        chunk_size_feed_forward: An integer specifying the chunk size for feed-forward computations.
        seq_len_dim: An integer indicating the dimension along which the sequence length is defined.
        attention: An instance of the BigBirdAttention class, responsible for performing attention computations.
        is_decoder: A boolean value indicating whether the BigBirdLayer is used as a decoder model.
        add_cross_attention: A boolean value specifying whether cross attention is added to the model.
        crossattention: An instance of the BigBirdAttention class used for cross-attention computations.
        intermediate: An instance of the BigBirdIntermediate class, responsible for intermediate computations.
        output: An instance of the BigBirdOutput class, used for the final output computations.

    Methods:
        set_attention_type: Sets the attention type to either 'original_full' or 'block_sparse'.
        forward: Constructs the layer by performing attention-based computations and returning the outputs.
        feed_forward_chunk: Applies the feed-forward computation on the attention output.

    """
    def __init__(self, config, seed=None):
        """
        Initializes a BigBirdLayer instance with the provided configuration and optional seed.

        Args:
            self: The BigBirdLayer instance itself.
            config: An object containing configuration settings for the BigBirdLayer.
                This parameter is required and should not be None.
            seed: An integer used for random seed initialization. Default is None.

        Returns:
            None.

        Raises:
            TypeError: If add_cross_attention is True but the model is not set as a decoder model.
        """
        super().__init__()
        self.config = config
        self.attention_type = config.attention_type
        self.chunk_size_feed_forward = config.chunk_size_feed_forward
        self.seq_len_dim = 1
        self.attention = BigBirdAttention(config, seed=seed)
        self.is_decoder = config.is_decoder
        self.add_cross_attention = config.add_cross_attention
        if self.add_cross_attention:
            if not self.is_decoder:
                raise TypeError(f"{self} should be used as a decoder model if cross attention is added")
            self.crossattention = BigBirdAttention(config)
        self.intermediate = BigBirdIntermediate(config)
        self.output = BigBirdOutput(config)

    def set_attention_type(self, value: str):
        """
        Sets the attention type for the BigBirdLayer.

        Args:
            self (BigBirdLayer): The instance of the BigBirdLayer class.
            value (str): The attention type to be set. It can only be 'original_full' or 'block_sparse'.

        Returns:
            None.

        Raises:
            ValueError: If the provided attention type is not 'original_full' or 'block_sparse'.

        This method sets the attention type for the BigBirdLayer. The attention type determines the type of attention
        mechanism used in the layer.

        If the provided attention type is not 'original_full' or 'block_sparse', a ValueError is raised.
        Otherwise, if the provided attention type is the same as the current attention type, the method returns
        without making any changes. Otherwise, the attention type is updated, and the set_attention_type method is
        called on the attention object of the layer.

        If the layer has cross-attention enabled, the set_attention_type method is also called on the crossattention object.
        """
        if value not in ["original_full", "block_sparse"]:
            raise ValueError(
                f"attention_type can only be set to either 'original_full' or 'block_sparse', but is {value}"
            )
        # attention type is already correctly set
        if value == self.attention_type:
            return
        self.attention_type = value
        self.attention.set_attention_type(value)

        if self.add_cross_attention:
            self.crossattention.set_attention_type(value)

    def forward(
        self,
        hidden_states,
        attention_mask=None,
        head_mask=None,
        encoder_hidden_states=None,
        encoder_attention_mask=None,
        band_mask=None,
        from_mask=None,
        to_mask=None,
        blocked_encoder_mask=None,
        past_key_value=None,
        output_attentions=False,
    ):
        '''
        Constructs the BigBirdLayer.

        Args:
            self: The instance of the BigBirdLayer class.
            hidden_states (Tensor): The input hidden states.
            attention_mask (Tensor, optional): The attention mask for the self-attention mechanism. Default is None.
            head_mask (Tensor, optional): The mask for the attention heads. Default is None.
            encoder_hidden_states (Tensor, optional): The hidden states of the encoder. Default is None.
            encoder_attention_mask (Tensor, optional): The attention mask for the encoder. Default is None.
            band_mask (Tensor, optional): The band mask for attention. Default is None.
            from_mask (Tensor, optional): The 'from' mask for attention. Default is None.
            to_mask (Tensor, optional): The 'to' mask for attention. Default is None.
            blocked_encoder_mask (Tensor, optional): The mask for blocked encoder. Default is None.
            past_key_value (Tensor, optional): The past key-value pair. Default is None.
            output_attentions (bool): Whether to output attentions. Default is False.

        Returns:
            None.

        Raises:
            ValueError: If `encoder_hidden_states` are passed and cross-attention layers are not instantiated
                by setting `config.add_cross_attention=True`.
        '''
        # decoder uni-directional self-attention cached key/values tuple is at positions 1,2
        self_attn_past_key_value = past_key_value[:2] if past_key_value is not None else None
        self_attention_outputs = self.attention(
            hidden_states,
            attention_mask,
            head_mask,
            encoder_hidden_states=encoder_hidden_states,
            encoder_attention_mask=encoder_attention_mask,
            past_key_value=self_attn_past_key_value,
            output_attentions=output_attentions,
            band_mask=band_mask,
            from_mask=from_mask,
            to_mask=to_mask,
            from_blocked_mask=blocked_encoder_mask,
            to_blocked_mask=blocked_encoder_mask,
        )
        attention_output = self_attention_outputs[0]

        # if decoder, the last output is tuple of self-attn cache
        if self.is_decoder:
            outputs = self_attention_outputs[1:-1]
            present_key_value = self_attention_outputs[-1]
        else:
            outputs = self_attention_outputs[1:]  # add self attentions if we output attention weights

        cross_attn_present_key_value = None
        if self.is_decoder and encoder_hidden_states is not None:
            if not hasattr(self, "crossattention"):
                raise ValueError(
                    f"If `encoder_hidden_states` are passed, {self} has to be instantiated with"
                    " cross-attention layers by setting `config.add_cross_attention=True`"
                )

            # cross_attn cached key/values tuple is at positions 3,4 of past_key_value tuple
            cross_attn_past_key_value = past_key_value[-2:] if past_key_value is not None else None
            cross_attention_outputs = self.crossattention(
                attention_output,
                attention_mask,
                head_mask,
                encoder_hidden_states,
                encoder_attention_mask,
                cross_attn_past_key_value,
                output_attentions,
            )
            attention_output = cross_attention_outputs[0]
            outputs = outputs + cross_attention_outputs[1:-1]  # add cross attentions if we output attention weights

            # add cross-attn cache to positions 3,4 of present_key_value tuple
            cross_attn_present_key_value = cross_attention_outputs[-1]
            present_key_value = present_key_value + cross_attn_present_key_value

        layer_output = apply_chunking_to_forward(
            self.feed_forward_chunk, self.chunk_size_feed_forward, self.seq_len_dim, attention_output
        )

        outputs = (layer_output,) + outputs

        # if decoder, return the attn key/values as the last output
        if self.is_decoder:
            outputs = outputs + (present_key_value,)

        return outputs

    def feed_forward_chunk(self, attention_output):
        """
        Method: feed_forward_chunk

        Description:
        Performs a feed-forward chunk operation within the BigBirdLayer class.

        Args:
            self (BigBirdLayer): The instance of the BigBirdLayer class.
            attention_output (Tensor): The input tensor representing the attention output.

        Returns:
            Tensor: The output tensor after the feed-forward chunk operation.

        Raises:
            ValueError: If the input tensor dimensions are not compatible.
            RuntimeError: If there is an issue during the intermediate or output computations.
        """
        intermediate_output = self.intermediate(attention_output)
        layer_output = self.output(intermediate_output, attention_output)
        return layer_output
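The cache layout assumed in `forward` above (self-attention key/value at positions 0-1, cross-attention key/value at the last two positions of the `past_key_value` tuple) can be sketched as a small helper:

```python
def split_past_key_value(past_key_value):
    """Split a decoder layer's cached tuple following the convention in
    BigBirdLayer.forward: positions 0-1 hold the self-attention
    key/value, the last two positions hold the cross-attention
    key/value. Returns (None, None) when there is no cache."""
    if past_key_value is None:
        return None, None
    return past_key_value[:2], past_key_value[-2:]
```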

mindnlp.transformers.models.big_bird.modeling_big_bird.BigBirdLayer.__init__(config, seed=None)

Initializes a BigBirdLayer instance with the provided configuration and optional seed.

PARAMETER DESCRIPTION
self

The BigBirdLayer instance itself.

config

An object containing configuration settings for the BigBirdLayer. This parameter is required and should not be None.

seed

An integer used for random seed initialization. Default is None.

DEFAULT: None

RETURNS DESCRIPTION

None.

RAISES DESCRIPTION
TypeError

If add_cross_attention is True but the model is not set as a decoder model.

Source code in mindnlp/transformers/models/big_bird/modeling_big_bird.py
def __init__(self, config, seed=None):
    """
    Initializes a BigBirdLayer instance with the provided configuration and optional seed.

    Args:
        self: The BigBirdLayer instance itself.
        config: An object containing configuration settings for the BigBirdLayer.
            This parameter is required and should not be None.
        seed: An integer used for random seed initialization. Default is None.

    Returns:
        None.

    Raises:
        TypeError: If add_cross_attention is True but the model is not set as a decoder model.
    """
    super().__init__()
    self.config = config
    self.attention_type = config.attention_type
    self.chunk_size_feed_forward = config.chunk_size_feed_forward
    self.seq_len_dim = 1
    self.attention = BigBirdAttention(config, seed=seed)
    self.is_decoder = config.is_decoder
    self.add_cross_attention = config.add_cross_attention
    if self.add_cross_attention:
        if not self.is_decoder:
            raise TypeError(f"{self} should be used as a decoder model if cross attention is added")
        self.crossattention = BigBirdAttention(config)
    self.intermediate = BigBirdIntermediate(config)
    self.output = BigBirdOutput(config)

mindnlp.transformers.models.big_bird.modeling_big_bird.BigBirdLayer.feed_forward_chunk(attention_output)

Description: Performs a feed-forward chunk operation within the BigBirdLayer class.

PARAMETER DESCRIPTION
self

The instance of the BigBirdLayer class.

TYPE: BigBirdLayer

attention_output

The input tensor representing the attention output.

TYPE: Tensor

RETURNS DESCRIPTION
Tensor

The output tensor after the feed-forward chunk operation.

RAISES DESCRIPTION
ValueError

If the input tensor dimensions are not compatible.

RuntimeError

If there is an issue during the intermediate or output computations.

Source code in mindnlp/transformers/models/big_bird/modeling_big_bird.py
def feed_forward_chunk(self, attention_output):
    """
    Method: feed_forward_chunk

    Description:
    Performs a feed-forward chunk operation within the BigBirdLayer class.

    Args:
        self (BigBirdLayer): The instance of the BigBirdLayer class.
        attention_output (Tensor): The input tensor representing the attention output.

    Returns:
        Tensor: The output tensor after the feed-forward chunk operation.

    Raises:
        ValueError: If the input tensor dimensions are not compatible.
        RuntimeError: If there is an issue during the intermediate or output computations.
    """
    intermediate_output = self.intermediate(attention_output)
    layer_output = self.output(intermediate_output, attention_output)
    return layer_output
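`apply_chunking_to_forward` (used in `BigBirdLayer.forward`) splits the sequence dimension into chunks of `chunk_size_feed_forward` and runs `feed_forward_chunk` on each piece, trading peak memory for extra calls. A simplified list-based sketch of that behavior (not the library utility itself):

```python
def apply_chunking(forward_fn, chunk_size, sequence):
    """Simplified stand-in for apply_chunking_to_forward: with
    chunk_size == 0 the whole sequence is processed at once; otherwise
    it is split along the sequence dimension, processed chunk by chunk,
    and the results are concatenated. The output is identical either
    way, since the feed-forward acts position-wise."""
    if chunk_size == 0:
        return forward_fn(sequence)
    out = []
    for start in range(0, len(sequence), chunk_size):
        out.extend(forward_fn(sequence[start:start + chunk_size]))
    return out
```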

mindnlp.transformers.models.big_bird.modeling_big_bird.BigBirdLayer.forward(hidden_states, attention_mask=None, head_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, band_mask=None, from_mask=None, to_mask=None, blocked_encoder_mask=None, past_key_value=None, output_attentions=False)

Constructs the BigBirdLayer.

PARAMETER DESCRIPTION
self

The instance of the BigBirdLayer class.

hidden_states

The input hidden states.

TYPE: Tensor

attention_mask

The attention mask for the self-attention mechanism. Default is None.

TYPE: Tensor DEFAULT: None

head_mask

The mask for the attention heads. Default is None.

TYPE: Tensor DEFAULT: None

encoder_hidden_states

The hidden states of the encoder. Default is None.

TYPE: Tensor DEFAULT: None

encoder_attention_mask

The attention mask for the encoder. Default is None.

TYPE: Tensor DEFAULT: None

band_mask

The band mask for attention. Default is None.

TYPE: Tensor DEFAULT: None

from_mask

The 'from' mask for attention. Default is None.

TYPE: Tensor DEFAULT: None

to_mask

The 'to' mask for attention. Default is None.

TYPE: Tensor DEFAULT: None

blocked_encoder_mask

The mask for blocked encoder. Default is None.

TYPE: Tensor DEFAULT: None

past_key_value

The past key-value pair. Default is None.

TYPE: Tensor DEFAULT: None

output_attentions

Whether to output attentions. Default is False.

TYPE: bool DEFAULT: False

RETURNS DESCRIPTION

None.

RAISES DESCRIPTION
ValueError

If encoder_hidden_states are passed and cross-attention layers are not instantiated by setting config.add_cross_attention=True.

Source code in mindnlp/transformers/models/big_bird/modeling_big_bird.py
def forward(
    self,
    hidden_states,
    attention_mask=None,
    head_mask=None,
    encoder_hidden_states=None,
    encoder_attention_mask=None,
    band_mask=None,
    from_mask=None,
    to_mask=None,
    blocked_encoder_mask=None,
    past_key_value=None,
    output_attentions=False,
):
    '''
    Constructs the BigBirdLayer.

    Args:
        self: The instance of the BigBirdLayer class.
        hidden_states (Tensor): The input hidden states.
        attention_mask (Tensor, optional): The attention mask for the self-attention mechanism. Default is None.
        head_mask (Tensor, optional): The mask for the attention heads. Default is None.
        encoder_hidden_states (Tensor, optional): The hidden states of the encoder. Default is None.
        encoder_attention_mask (Tensor, optional): The attention mask for the encoder. Default is None.
        band_mask (Tensor, optional): The band mask for attention. Default is None.
        from_mask (Tensor, optional): The 'from' mask for attention. Default is None.
        to_mask (Tensor, optional): The 'to' mask for attention. Default is None.
        blocked_encoder_mask (Tensor, optional): The mask for blocked encoder. Default is None.
        past_key_value (Tensor, optional): The past key-value pair. Default is None.
        output_attentions (bool): Whether to output attentions. Default is False.

    Returns:
        None.

    Raises:
        ValueError: If `encoder_hidden_states` are passed and cross-attention layers are not instantiated
            by setting `config.add_cross_attention=True`.
    '''
    # decoder uni-directional self-attention cached key/values tuple is at positions 1,2
    self_attn_past_key_value = past_key_value[:2] if past_key_value is not None else None
    self_attention_outputs = self.attention(
        hidden_states,
        attention_mask,
        head_mask,
        encoder_hidden_states=encoder_hidden_states,
        encoder_attention_mask=encoder_attention_mask,
        past_key_value=self_attn_past_key_value,
        output_attentions=output_attentions,
        band_mask=band_mask,
        from_mask=from_mask,
        to_mask=to_mask,
        from_blocked_mask=blocked_encoder_mask,
        to_blocked_mask=blocked_encoder_mask,
    )
    attention_output = self_attention_outputs[0]

    # if decoder, the last output is tuple of self-attn cache
    if self.is_decoder:
        outputs = self_attention_outputs[1:-1]
        present_key_value = self_attention_outputs[-1]
    else:
        outputs = self_attention_outputs[1:]  # add self attentions if we output attention weights

    cross_attn_present_key_value = None
    if self.is_decoder and encoder_hidden_states is not None:
        if not hasattr(self, "crossattention"):
            raise ValueError(
                f"If `encoder_hidden_states` are passed, {self} has to be instantiated with"
                " cross-attention layers by setting `config.add_cross_attention=True`"
            )

        # cross_attn cached key/values tuple is at positions 3,4 of past_key_value tuple
        cross_attn_past_key_value = past_key_value[-2:] if past_key_value is not None else None
        cross_attention_outputs = self.crossattention(
            attention_output,
            attention_mask,
            head_mask,
            encoder_hidden_states,
            encoder_attention_mask,
            cross_attn_past_key_value,
            output_attentions,
        )
        attention_output = cross_attention_outputs[0]
        outputs = outputs + cross_attention_outputs[1:-1]  # add cross attentions if we output attention weights

        # add cross-attn cache to positions 3,4 of present_key_value tuple
        cross_attn_present_key_value = cross_attention_outputs[-1]
        present_key_value = present_key_value + cross_attn_present_key_value

    layer_output = apply_chunking_to_forward(
        self.feed_forward_chunk, self.chunk_size_feed_forward, self.seq_len_dim, attention_output
    )

    outputs = (layer_output,) + outputs

    # if decoder, return the attn key/values as the last output
    if self.is_decoder:
        outputs = outputs + (present_key_value,)

    return outputs

mindnlp.transformers.models.big_bird.modeling_big_bird.BigBirdLayer.set_attention_type(value)

Sets the attention type for the BigBirdLayer.

PARAMETER DESCRIPTION
self

The instance of the BigBirdLayer class.

TYPE: BigBirdLayer

value

The attention type to be set. It can only be 'original_full' or 'block_sparse'.

TYPE: str

RETURNS DESCRIPTION

None.

RAISES DESCRIPTION
ValueError

If the provided attention type is not 'original_full' or 'block_sparse'.

This method sets the attention type for the BigBirdLayer. The attention type determines the type of attention mechanism used in the layer.

If the provided attention type is not 'original_full' or 'block_sparse', a ValueError is raised. Otherwise, if the provided attention type is the same as the current attention type, the method returns without making any changes. Otherwise, the attention type is updated, and the set_attention_type method is called on the attention object of the layer.

If the layer has cross-attention enabled, the set_attention_type method is also called on the crossattention object.

Source code in mindnlp/transformers/models/big_bird/modeling_big_bird.py
def set_attention_type(self, value: str):
    """
    Sets the attention type for the BigBirdLayer.

    Args:
        self (BigBirdLayer): The instance of the BigBirdLayer class.
        value (str): The attention type to be set. It can only be 'original_full' or 'block_sparse'.

    Returns:
        None.

    Raises:
        ValueError: If the provided attention type is not 'original_full' or 'block_sparse'.

    This method sets the attention type for the BigBirdLayer. The attention type determines the type of attention
    mechanism used in the layer.

    If the provided attention type is not 'original_full' or 'block_sparse', a ValueError is raised.
    Otherwise, if the provided attention type is the same as the current attention type, the method returns
    without making any changes. Otherwise, the attention type is updated, and the set_attention_type method is
    called on the attention object of the layer.

    If the layer has cross-attention enabled, the set_attention_type method is also called on the crossattention object.
    """
    if value not in ["original_full", "block_sparse"]:
        raise ValueError(
            f"attention_type can only be set to either 'original_full' or 'block_sparse', but is {value}"
        )
    # attention type is already correctly set
    if value == self.attention_type:
        return
    self.attention_type = value
    self.attention.set_attention_type(value)

    if self.add_cross_attention:
        self.crossattention.set_attention_type(value)
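The guard at the top of the method can be exercised on its own; a minimal standalone version of the check:

```python
VALID_ATTENTION_TYPES = ("original_full", "block_sparse")

def validate_attention_type(value):
    """Raise ValueError for anything other than the two supported
    attention types, as set_attention_type does before mutating state."""
    if value not in VALID_ATTENTION_TYPES:
        raise ValueError(
            f"attention_type can only be set to either 'original_full' or 'block_sparse', but is {value}"
        )
```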

mindnlp.transformers.models.big_bird.modeling_big_bird.BigBirdModel

Bases: BigBirdPreTrainedModel

The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of cross-attention is added between the self-attention layers, following the architecture described in Attention is all you need by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.

To behave as a decoder the model needs to be initialized with the is_decoder argument of the configuration set to True. To be used in a Seq2Seq model, the model needs to be initialized with both the is_decoder argument and add_cross_attention set to True; an encoder_hidden_states is then expected as an input to the forward pass.
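The three usage modes described above map onto two configuration flags. As a hypothetical summary (plain dicts for illustration, not a real BigBirdConfig):

```python
# Hypothetical flag combinations for the three modes described above.
MODES = {
    "encoder": {"is_decoder": False, "add_cross_attention": False},
    "decoder": {"is_decoder": True, "add_cross_attention": False},
    # Seq2Seq decoder: cross-attention layers are added, and
    # encoder_hidden_states is expected as a forward-pass input.
    "seq2seq_decoder": {"is_decoder": True, "add_cross_attention": True},
}
```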

Source code in mindnlp/transformers/models/big_bird/modeling_big_bird.py
class BigBirdModel(BigBirdPreTrainedModel):
    """

    The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
    cross-attention is added between the self-attention layers, following the architecture described in [Attention is
    all you need](https://arxiv.org/abs/1706.03762) by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit,
    Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.

    To behave as a decoder the model needs to be initialized with the `is_decoder` argument of the configuration set
    to `True`. To be used in a Seq2Seq model, the model needs to be initialized with both the `is_decoder` argument
    and `add_cross_attention` set to `True`; an `encoder_hidden_states` is then expected as an input to the forward pass.
    """
    def __init__(self, config, add_pooling_layer=True):
        """
        Initializes a new instance of the BigBirdModel class.

        Args:
            self (BigBirdModel): The instance of the class.
            config (object): The configuration object containing model parameters.
            add_pooling_layer (bool): Flag to indicate whether to add a pooling layer.

        Returns:
            None.

        Raises:
            None
        """
        super().__init__(config)
        self.attention_type = self.config.attention_type
        self.config = config

        self.block_size = self.config.block_size

        self.embeddings = BigBirdEmbeddings(config)
        self.encoder = BigBirdEncoder(config)

        if add_pooling_layer:
            self.pooler = nn.Linear(config.hidden_size, config.hidden_size)
            self.activation = nn.Tanh()
        else:
            self.pooler = None
            self.activation = None

        if self.attention_type != "original_full" and config.add_cross_attention:
            logger.warning(
                "When using `BigBirdForCausalLM` as decoder, then `attention_type` must be `original_full`. Setting"
                " `attention_type=original_full`"
            )
            self.set_attention_type("original_full")

        # Initialize weights and apply final processing
        self.post_init()

    def get_input_embeddings(self):
        """
        Method to retrieve the input embeddings from the BigBirdModel.

        Args:
            self: An instance of the BigBirdModel class.

        Returns:
            word_embeddings: The method returns the word embeddings stored in the BigBirdModel instance.

        Raises:
            None.
        """
        return self.embeddings.word_embeddings

    def set_input_embeddings(self, value):
        """
        Set the input embeddings of the BigBirdModel.

        Args:
            self (BigBirdModel): An instance of the BigBirdModel class.
            value: The new input embeddings to be set. This should be an instance of a compatible embedding object.

        Returns:
            None.

        Raises:
            None.

        This method is used to update the input embeddings of the BigBirdModel with a new set of embeddings.
        It takes in the instance of the BigBirdModel class and the new embeddings to be set as parameters.
        The 'value' parameter should be an instance of a compatible embedding object.

        Note that changing the input embeddings can have a significant impact on the model's performance,
        so it should be done carefully and with consideration of the specific task and data being used.

        This method does not return any value, as it directly modifies the input embeddings of the BigBirdModel instance.

        Example:
            ```python
            >>> model = BigBirdModel()
            >>> embeddings = WordEmbeddings()
            >>> model.set_input_embeddings(embeddings)
            ```
        """
        self.embeddings.word_embeddings = value

    def set_attention_type(self, value: str):
        """
        Method to set the attention type for the BigBirdModel.

        Args:
            self: Instance of the BigBirdModel class.
            value (str): The specified attention type to set. It can only be either 'original_full' or 'block_sparse'.

        Returns:
            None.

        Raises:
            ValueError: If the value provided is not 'original_full' or 'block_sparse'.
        """
        if value not in ["original_full", "block_sparse"]:
            raise ValueError(
                f"attention_type can only be set to either 'original_full' or 'block_sparse', but is {value}"
            )
        # attention type is already correctly set
        if value == self.attention_type:
            return
        self.attention_type = value
        self.encoder.set_attention_type(value)

    def forward(
        self,
        input_ids: mindspore.Tensor = None,
        attention_mask: Optional[mindspore.Tensor] = None,
        token_type_ids: Optional[mindspore.Tensor] = None,
        position_ids: Optional[mindspore.Tensor] = None,
        head_mask: Optional[mindspore.Tensor] = None,
        inputs_embeds: Optional[mindspore.Tensor] = None,
        encoder_hidden_states: Optional[mindspore.Tensor] = None,
        encoder_attention_mask: Optional[mindspore.Tensor] = None,
        past_key_values: Optional[Tuple[Tuple[mindspore.Tensor]]] = None,
        use_cache: Optional[bool] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[BaseModelOutputWithPoolingAndCrossAttentions, Tuple[mindspore.Tensor]]:
        r"""
        Args:
            encoder_hidden_states  (`mindspore.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
                Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
                the model is configured as a decoder.
            encoder_attention_mask (`mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
                Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
                the cross-attention if the model is configured as a decoder. Mask values selected in `[0, 1]`:

                - 1 for tokens that are **not masked**,
                - 0 for tokens that are **masked**.

            past_key_values (`tuple(tuple(mindspore.Tensor))` of length `config.n_layers` with each tuple having 4 tensors
                of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`):
                Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.

                If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that
                don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all
                `decoder_input_ids` of shape `(batch_size, sequence_length)`.
            use_cache (`bool`, *optional*):
                If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see
                `past_key_values`).
        """
        output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
        output_hidden_states = (
            output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
        )
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        if self.config.is_decoder:
            use_cache = use_cache if use_cache is not None else self.config.use_cache
        else:
            use_cache = False

        if input_ids is not None and inputs_embeds is not None:
            raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
        if input_ids is not None:
            self.warn_if_padding_and_no_attention_mask(input_ids, attention_mask)
            input_shape = input_ids.shape
        elif inputs_embeds is not None:
            input_shape = inputs_embeds.shape[:-1]
        else:
            raise ValueError("You have to specify either input_ids or inputs_embeds")

        batch_size, seq_length = input_shape

        # past_key_values_length
        past_key_values_length = past_key_values[0][0].shape[2] if past_key_values is not None else 0

        if attention_mask is None:
            attention_mask = ops.ones(batch_size, seq_length + past_key_values_length)
        if token_type_ids is None:
            if hasattr(self.embeddings, "token_type_ids"):
                buffered_token_type_ids = self.embeddings.token_type_ids[:, :seq_length]
                buffered_token_type_ids_expanded = ops.broadcast_to(buffered_token_type_ids, (batch_size, seq_length))
                token_type_ids = buffered_token_type_ids_expanded
            else:
                token_type_ids = ops.zeros(*input_shape, dtype=mindspore.int64)

        # in order to use block_sparse attention, sequence_length has to be at least
        # bigger than all global attentions: 2 * block_size
        # + sliding tokens: 3 * block_size
        # + random tokens: 2 * num_random_blocks * block_size
        max_tokens_to_attend = (5 + 2 * self.config.num_random_blocks) * self.config.block_size
        if self.attention_type == "block_sparse" and seq_length <= max_tokens_to_attend:
            # change attention_type from block_sparse to original_full
            sequence_length = input_ids.shape[1] if input_ids is not None else inputs_embeds.shape[1]
            logger.warning(
                "Attention type 'block_sparse' is not possible if sequence_length: "
                f"{sequence_length} <= num global tokens: 2 * config.block_size "
                "+ min. num sliding tokens: 3 * config.block_size "
                "+ config.num_random_blocks * config.block_size "
                "+ additional buffer: config.num_random_blocks * config.block_size "
                f"= {max_tokens_to_attend} with config.block_size "
                f"= {self.config.block_size}, config.num_random_blocks "
                f"= {self.config.num_random_blocks}. "
                "Changing attention type to 'original_full'..."
            )
            self.set_attention_type("original_full")

        if self.attention_type == "block_sparse":
            (
                padding_len,
                input_ids,
                attention_mask,
                token_type_ids,
                position_ids,
                inputs_embeds,
            ) = self._pad_to_block_size(
                input_ids=input_ids,
                attention_mask=attention_mask,
                token_type_ids=token_type_ids,
                position_ids=position_ids,
                inputs_embeds=inputs_embeds,
                pad_token_id=self.config.pad_token_id,
            )
        else:
            padding_len = 0

        if self.attention_type == "block_sparse":
            blocked_encoder_mask, band_mask, from_mask, to_mask = self.create_masks_for_block_sparse_attn(
                attention_mask, self.block_size
            )
            extended_attention_mask = None

        elif self.attention_type == "original_full":
            blocked_encoder_mask = None
            band_mask = None
            from_mask = None
            to_mask = None
            # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length]
            # ourselves in which case we just need to make it broadcastable to all heads.
            extended_attention_mask: mindspore.Tensor = self.get_extended_attention_mask(attention_mask, input_shape)
        else:
            raise ValueError(
                f"attention_type can either be original_full or block_sparse, but is {self.attention_type}"
            )

        # If a 2D or 3D attention mask is provided for the cross-attention
        # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length]
        if self.config.is_decoder and encoder_hidden_states is not None:
            encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states.shape
            encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length)
            if encoder_attention_mask is None:
                encoder_attention_mask = ops.ones(*encoder_hidden_shape)
            encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask)
        else:
            encoder_extended_attention_mask = None

        # Prepare head mask if needed
        # 1.0 in head_mask indicate we keep the head
        # attention_probs has shape bsz x n_heads x N x N
        # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads]
        # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length]
        head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)

        embedding_output = self.embeddings(
            input_ids=input_ids,
            position_ids=position_ids,
            token_type_ids=token_type_ids,
            inputs_embeds=inputs_embeds,
            past_key_values_length=past_key_values_length,
        )

        encoder_outputs = self.encoder(
            embedding_output,
            attention_mask=extended_attention_mask,
            head_mask=head_mask,
            encoder_hidden_states=encoder_hidden_states,
            encoder_attention_mask=encoder_extended_attention_mask,
            past_key_values=past_key_values,
            use_cache=use_cache,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            band_mask=band_mask,
            from_mask=from_mask,
            to_mask=to_mask,
            blocked_encoder_mask=blocked_encoder_mask,
            return_dict=return_dict,
        )
        sequence_output = encoder_outputs[0]

        pooler_output = self.activation(self.pooler(sequence_output[:, 0, :])) if (self.pooler is not None) else None

        # undo padding
        if padding_len > 0:
            # unpad `sequence_output` because the calling function is expecting a length == input_ids.shape[1]
            sequence_output = sequence_output[:, :-padding_len]

        if not return_dict:
            return (sequence_output, pooler_output) + encoder_outputs[1:]

        return BaseModelOutputWithPoolingAndCrossAttentions(
            last_hidden_state=sequence_output,
            pooler_output=pooler_output,
            past_key_values=encoder_outputs.past_key_values,
            hidden_states=encoder_outputs.hidden_states,
            attentions=encoder_outputs.attentions,
            cross_attentions=encoder_outputs.cross_attentions,
        )

    @staticmethod
    def create_masks_for_block_sparse_attn(attention_mask: mindspore.Tensor, block_size: int):
        """
        Creates masks for block sparse attention in the BigBirdModel class.

        Args:
            attention_mask (mindspore.Tensor): A 2D tensor representing the attention mask.
                Shape: [batch_size, seq_length].
            block_size (int): The size of each attention block.

        Returns:
            tuple:
                A tuple containing the following four tensors:

                - blocked_encoder_mask (mindspore.Tensor): A 3D tensor representing the attention mask in blocked format.
                Shape: [batch_size, seq_length // block_size, block_size].
                - band_mask (mindspore.Tensor): A 5D tensor representing the band mask for block sparse attention.
                Shape: [batch_size, 1, seq_length // block_size - 4, block_size, 3 * block_size].
                - from_mask (mindspore.Tensor): A 4D tensor representing the attention mask for the "from" sequence.
                Shape: [batch_size, 1, seq_length, 1].
                - to_mask (mindspore.Tensor): A 4D tensor representing the attention mask for the "to" sequence.
                Shape: [batch_size, 1, 1, seq_length].

        Raises:
            ValueError: If the sequence length is not a multiple of the block size.

        """
        batch_size, seq_length = attention_mask.shape
        if seq_length % block_size != 0:
            raise ValueError(
                f"Sequence length must be multiple of block size, but sequence length is {seq_length}, while block"
                f" size is {block_size}."
            )

        def create_band_mask_from_inputs(from_blocked_mask, to_blocked_mask):
            """
            Create 3D attention mask from a 2D tensor mask.

            Args:
                from_blocked_mask: 2D Tensor of shape [batch_size, from_seq_length//from_block_size, from_block_size].
                to_blocked_mask: int32 Tensor of shape [batch_size, to_seq_length//to_block_size, to_block_size].

            Returns:
                float Tensor of shape [batch_size, 1, from_seq_length//from_block_size-4, from_block_size,
                3*to_block_size].
            """
            exp_blocked_to_pad = ops.cat(
                [to_blocked_mask[:, 1:-3], to_blocked_mask[:, 2:-2], to_blocked_mask[:, 3:-1]], dim=2
            )
            band_mask = ops.einsum("blq,blk->blqk", from_blocked_mask[:, 2:-2], exp_blocked_to_pad)
            band_mask = band_mask.unsqueeze(1)
            return band_mask

        blocked_encoder_mask = attention_mask.view(batch_size, seq_length // block_size, block_size)
        band_mask = create_band_mask_from_inputs(blocked_encoder_mask, blocked_encoder_mask)

        from_mask = attention_mask.view(batch_size, 1, seq_length, 1)
        to_mask = attention_mask.view(batch_size, 1, 1, seq_length)

        return blocked_encoder_mask, band_mask, from_mask, to_mask

    def _pad_to_block_size(
        self,
        input_ids: mindspore.Tensor,
        attention_mask: mindspore.Tensor,
        token_type_ids: mindspore.Tensor,
        position_ids: mindspore.Tensor,
        inputs_embeds: mindspore.Tensor,
        pad_token_id: int,
    ):
        """A helper function to pad tokens and mask to work with implementation of BigBird block-sparse attention."""
        # padding
        block_size = self.config.block_size

        input_shape = input_ids.shape if input_ids is not None else inputs_embeds.shape
        batch_size, seq_len = input_shape[:2]

        padding_len = (block_size - seq_len % block_size) % block_size
        if padding_len > 0:
            logger.warning_once(
                f"Input ids are automatically padded from {seq_len} to {seq_len + padding_len} to be a multiple of "
                f"`config.block_size`: {block_size}"
            )
            if input_ids is not None:
                input_ids = ops.pad(input_ids, (0, padding_len), value=pad_token_id)
            if position_ids is not None:
                # pad with position_id = pad_token_id as in modeling_bigbird.BigBirdEmbeddings
                position_ids = ops.pad(position_ids, (0, padding_len), value=pad_token_id)
            if inputs_embeds is not None:
                input_ids_padding = inputs_embeds.new_full(
                    (batch_size, padding_len),
                    self.config.pad_token_id,
                    dtype=mindspore.int64,
                )
                inputs_embeds_padding = self.embeddings(input_ids_padding)
                inputs_embeds = ops.cat([inputs_embeds, inputs_embeds_padding], dim=-2)

            attention_mask = ops.pad(
                attention_mask, (0, padding_len), value=False
            )  # no attention on the padding tokens
            token_type_ids = ops.pad(token_type_ids, (0, padding_len), value=0)  # pad with token_type_id = 0

        return padding_len, input_ids, attention_mask, token_type_ids, position_ids, inputs_embeds
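
The padding rule in `_pad_to_block_size` rounds the sequence length up to the next multiple of `config.block_size`. A minimal sketch of that arithmetic in plain Python (no mindspore required; `pad_len` is an illustrative helper, not mindnlp API):

```python
def pad_len(seq_len: int, block_size: int) -> int:
    """Number of pad tokens _pad_to_block_size appends so that
    (seq_len + pad_len) is a multiple of block_size."""
    return (block_size - seq_len % block_size) % block_size

print(pad_len(100, 64))  # 28: pads 100 tokens up to 128
print(pad_len(128, 64))  # 0: already block-aligned, no padding
```

The outer `% block_size` ensures already-aligned sequences receive zero padding rather than a full extra block.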

mindnlp.transformers.models.big_bird.modeling_big_bird.BigBirdModel.__init__(config, add_pooling_layer=True)

Initializes a new instance of the BigBirdModel class.

PARAMETER DESCRIPTION
self

The instance of the class.

TYPE: BigBirdModel

config

The configuration object containing model parameters.

TYPE: object

add_pooling_layer

Flag to indicate whether to add a pooling layer.

TYPE: bool DEFAULT: True

RETURNS DESCRIPTION

None.

Source code in mindnlp/transformers/models/big_bird/modeling_big_bird.py
def __init__(self, config, add_pooling_layer=True):
    """
    Initializes a new instance of the BigBirdModel class.

    Args:
        self (BigBirdModel): The instance of the class.
        config (object): The configuration object containing model parameters.
        add_pooling_layer (bool): Flag to indicate whether to add a pooling layer.

    Returns:
        None.

    Raises:
        None
    """
    super().__init__(config)
    self.attention_type = self.config.attention_type
    self.config = config

    self.block_size = self.config.block_size

    self.embeddings = BigBirdEmbeddings(config)
    self.encoder = BigBirdEncoder(config)

    if add_pooling_layer:
        self.pooler = nn.Linear(config.hidden_size, config.hidden_size)
        self.activation = nn.Tanh()
    else:
        self.pooler = None
        self.activation = None

    if self.attention_type != "original_full" and config.add_cross_attention:
        logger.warning(
            "When using `BigBirdForCausalLM` as decoder, then `attention_type` must be `original_full`. Setting"
            " `attention_type=original_full`"
        )
        self.set_attention_type("original_full")

    # Initialize weights and apply final processing
    self.post_init()
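
The cross-attention guard in `__init__` can be summarized as a small pure-Python rule (a sketch of the logic above; `resolve_attention_type` is an illustrative name, not mindnlp API):

```python
def resolve_attention_type(attention_type: str, add_cross_attention: bool) -> str:
    # Mirrors the guard in BigBirdModel.__init__: cross-attention (decoder use)
    # is only supported with full attention, so block_sparse is overridden.
    if attention_type != "original_full" and add_cross_attention:
        return "original_full"
    return attention_type

print(resolve_attention_type("block_sparse", True))   # original_full
print(resolve_attention_type("block_sparse", False))  # block_sparse
```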

mindnlp.transformers.models.big_bird.modeling_big_bird.BigBirdModel.create_masks_for_block_sparse_attn(attention_mask, block_size) staticmethod

Creates masks for block sparse attention in the BigBirdModel class.

PARAMETER DESCRIPTION
attention_mask

A 2D tensor representing the attention mask. Shape: [batch_size, seq_length].

TYPE: Tensor

block_size

The size of each attention block.

TYPE: int

RETURNS DESCRIPTION
tuple

A tuple containing the following four tensors:

  • blocked_encoder_mask (mindspore.Tensor): A 3D tensor representing the attention mask in blocked format. Shape: [batch_size, seq_length // block_size, block_size].
  • band_mask (mindspore.Tensor): A 5D tensor representing the band mask for block sparse attention. Shape: [batch_size, 1, seq_length // block_size - 4, block_size, 3 * block_size].
  • from_mask (mindspore.Tensor): A 4D tensor representing the attention mask for the "from" sequence. Shape: [batch_size, 1, seq_length, 1].
  • to_mask (mindspore.Tensor): A 4D tensor representing the attention mask for the "to" sequence. Shape: [batch_size, 1, 1, seq_length].
RAISES DESCRIPTION
ValueError

If the sequence length is not a multiple of the block size.

Source code in mindnlp/transformers/models/big_bird/modeling_big_bird.py
@staticmethod
def create_masks_for_block_sparse_attn(attention_mask: mindspore.Tensor, block_size: int):
    """
    Creates masks for block sparse attention in the BigBirdModel class.

    Args:
        attention_mask (mindspore.Tensor): A 2D tensor representing the attention mask.
            Shape: [batch_size, seq_length].
        block_size (int): The size of each attention block.

    Returns:
        tuple:
            A tuple containing the following four tensors:

            - blocked_encoder_mask (mindspore.Tensor): A 3D tensor representing the attention mask in blocked format.
            Shape: [batch_size, seq_length // block_size, block_size].
            - band_mask (mindspore.Tensor): A 5D tensor representing the band mask for block sparse attention.
            Shape: [batch_size, 1, seq_length // block_size - 4, block_size, 3 * block_size].
            - from_mask (mindspore.Tensor): A 4D tensor representing the attention mask for the "from" sequence.
            Shape: [batch_size, 1, seq_length, 1].
            - to_mask (mindspore.Tensor): A 4D tensor representing the attention mask for the "to" sequence.
            Shape: [batch_size, 1, 1, seq_length].

    Raises:
        ValueError: If the sequence length is not a multiple of the block size.

    """
    batch_size, seq_length = attention_mask.shape
    if seq_length % block_size != 0:
        raise ValueError(
            f"Sequence length must be multiple of block size, but sequence length is {seq_length}, while block"
            f" size is {block_size}."
        )

    def create_band_mask_from_inputs(from_blocked_mask, to_blocked_mask):
        """
        Create 3D attention mask from a 2D tensor mask.

        Args:
            from_blocked_mask: 2D Tensor of shape [batch_size, from_seq_length//from_block_size, from_block_size].
            to_blocked_mask: int32 Tensor of shape [batch_size, to_seq_length//to_block_size, to_block_size].

        Returns:
            float Tensor of shape [batch_size, 1, from_seq_length//from_block_size-4, from_block_size,
            3*to_block_size].
        """
        exp_blocked_to_pad = ops.cat(
            [to_blocked_mask[:, 1:-3], to_blocked_mask[:, 2:-2], to_blocked_mask[:, 3:-1]], dim=2
        )
        band_mask = ops.einsum("blq,blk->blqk", from_blocked_mask[:, 2:-2], exp_blocked_to_pad)
        band_mask = band_mask.unsqueeze(1)
        return band_mask

    blocked_encoder_mask = attention_mask.view(batch_size, seq_length // block_size, block_size)
    band_mask = create_band_mask_from_inputs(blocked_encoder_mask, blocked_encoder_mask)

    from_mask = attention_mask.view(batch_size, 1, seq_length, 1)
    to_mask = attention_mask.view(batch_size, 1, 1, seq_length)

    return blocked_encoder_mask, band_mask, from_mask, to_mask
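
The four mask shapes returned above can be checked with plain Python tuples rather than mindspore tensors (a shape-bookkeeping sketch under example values, not library API):

```python
batch_size, seq_length, block_size = 2, 448, 64
assert seq_length % block_size == 0, "required by create_masks_for_block_sparse_attn"
num_blocks = seq_length // block_size  # 7

blocked_encoder_mask_shape = (batch_size, num_blocks, block_size)
# The band mask covers each middle block's 3-block sliding window; the first
# and last two blocks are handled separately, hence num_blocks - 4.
band_mask_shape = (batch_size, 1, num_blocks - 4, block_size, 3 * block_size)
from_mask_shape = (batch_size, 1, seq_length, 1)
to_mask_shape = (batch_size, 1, 1, seq_length)

print(band_mask_shape)  # (2, 1, 3, 64, 192)
```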

mindnlp.transformers.models.big_bird.modeling_big_bird.BigBirdModel.forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, encoder_hidden_states=None, encoder_attention_mask=None, past_key_values=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None)

PARAMETER DESCRIPTION
encoder_hidden_states

Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder.

TYPE: (`mindspore.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional* DEFAULT: None

encoder_attention_mask

Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:

  • 1 for tokens that are not masked,
  • 0 for tokens that are masked.

TYPE: `mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional* DEFAULT: None

use_cache

If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).

TYPE: `bool`, *optional* DEFAULT: None
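
The forward pass falls back from `block_sparse` to `original_full` attention whenever `seq_length` does not exceed a threshold derived from the config. That threshold can be reproduced with plain arithmetic (a sketch; `min_seq_len_for_block_sparse` is an illustrative helper, shown with the defaults `block_size=64`, `num_random_blocks=3`):

```python
def min_seq_len_for_block_sparse(block_size: int, num_random_blocks: int) -> int:
    # As computed in BigBirdModel.forward: 2 * block_size global tokens
    # + 3 * block_size sliding tokens + 2 * num_random_blocks * block_size
    # random tokens. seq_length must be strictly greater than this.
    return (5 + 2 * num_random_blocks) * block_size

print(min_seq_len_for_block_sparse(64, 3))  # 704
```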

Source code in mindnlp/transformers/models/big_bird/modeling_big_bird.py
def forward(
    self,
    input_ids: mindspore.Tensor = None,
    attention_mask: Optional[mindspore.Tensor] = None,
    token_type_ids: Optional[mindspore.Tensor] = None,
    position_ids: Optional[mindspore.Tensor] = None,
    head_mask: Optional[mindspore.Tensor] = None,
    inputs_embeds: Optional[mindspore.Tensor] = None,
    encoder_hidden_states: Optional[mindspore.Tensor] = None,
    encoder_attention_mask: Optional[mindspore.Tensor] = None,
    past_key_values: Optional[Tuple[Tuple[mindspore.Tensor]]] = None,
    use_cache: Optional[bool] = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
) -> Union[BaseModelOutputWithPoolingAndCrossAttentions, Tuple[mindspore.Tensor]]:
    r"""
    Args:
        encoder_hidden_states  (`mindspore.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
            Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
            the model is configured as a decoder.
        encoder_attention_mask (`mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
            Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
            the cross-attention if the model is configured as a decoder. Mask values selected in `[0, 1]`:

            - 1 for tokens that are **not masked**,
            - 0 for tokens that are **masked**.

        past_key_values (`tuple(tuple(mindspore.Tensor))` of length `config.n_layers` with each tuple having 4 tensors
            of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`):
            Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.

            If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that
            don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all
            `decoder_input_ids` of shape `(batch_size, sequence_length)`.
        use_cache (`bool`, *optional*):
            If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see
            `past_key_values`).
    """
    output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
    output_hidden_states = (
        output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
    )
    return_dict = return_dict if return_dict is not None else self.config.use_return_dict

    if self.config.is_decoder:
        use_cache = use_cache if use_cache is not None else self.config.use_cache
    else:
        use_cache = False

    if input_ids is not None and inputs_embeds is not None:
        raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
    if input_ids is not None:
        self.warn_if_padding_and_no_attention_mask(input_ids, attention_mask)
        input_shape = input_ids.shape
    elif inputs_embeds is not None:
        input_shape = inputs_embeds.shape[:-1]
    else:
        raise ValueError("You have to specify either input_ids or inputs_embeds")

    batch_size, seq_length = input_shape

    # past_key_values_length
    past_key_values_length = past_key_values[0][0].shape[2] if past_key_values is not None else 0

    if attention_mask is None:
        attention_mask = ops.ones(batch_size, seq_length + past_key_values_length)
    if token_type_ids is None:
        if hasattr(self.embeddings, "token_type_ids"):
            buffered_token_type_ids = self.embeddings.token_type_ids[:, :seq_length]
            buffered_token_type_ids_expanded = ops.broadcast_to(buffered_token_type_ids, (batch_size, seq_length))
            token_type_ids = buffered_token_type_ids_expanded
        else:
            token_type_ids = ops.zeros(*input_shape, dtype=mindspore.int64)

    # in order to use block_sparse attention, sequence_length has to be at least
    # bigger than all global attentions: 2 * block_size
    # + sliding tokens: 3 * block_size
    # + random tokens: 2 * num_random_blocks * block_size
    max_tokens_to_attend = (5 + 2 * self.config.num_random_blocks) * self.config.block_size
    if self.attention_type == "block_sparse" and seq_length <= max_tokens_to_attend:
        # change attention_type from block_sparse to original_full
        sequence_length = input_ids.shape[1] if input_ids is not None else inputs_embeds.shape[1]
        logger.warning(
            "Attention type 'block_sparse' is not possible if sequence_length: "
            f"{sequence_length} <= num global tokens: 2 * config.block_size "
            "+ min. num sliding tokens: 3 * config.block_size "
            "+ config.num_random_blocks * config.block_size "
            "+ additional buffer: config.num_random_blocks * config.block_size "
            f"= {max_tokens_to_attend} with config.block_size "
            f"= {self.config.block_size}, config.num_random_blocks "
            f"= {self.config.num_random_blocks}. "
            "Changing attention type to 'original_full'..."
        )
        self.set_attention_type("original_full")

    if self.attention_type == "block_sparse":
        (
            padding_len,
            input_ids,
            attention_mask,
            token_type_ids,
            position_ids,
            inputs_embeds,
        ) = self._pad_to_block_size(
            input_ids=input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            position_ids=position_ids,
            inputs_embeds=inputs_embeds,
            pad_token_id=self.config.pad_token_id,
        )
    else:
        padding_len = 0

    if self.attention_type == "block_sparse":
        blocked_encoder_mask, band_mask, from_mask, to_mask = self.create_masks_for_block_sparse_attn(
            attention_mask, self.block_size
        )
        extended_attention_mask = None

    elif self.attention_type == "original_full":
        blocked_encoder_mask = None
        band_mask = None
        from_mask = None
        to_mask = None
        # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length]
        # ourselves in which case we just need to make it broadcastable to all heads.
        extended_attention_mask: mindspore.Tensor = self.get_extended_attention_mask(attention_mask, input_shape)
    else:
        raise ValueError(
            f"attention_type can either be original_full or block_sparse, but is {self.attention_type}"
        )

    # If a 2D or 3D attention mask is provided for the cross-attention
    # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length]
    if self.config.is_decoder and encoder_hidden_states is not None:
        encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states.shape
        encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length)
        if encoder_attention_mask is None:
            encoder_attention_mask = ops.ones(*encoder_hidden_shape)
        encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask)
    else:
        encoder_extended_attention_mask = None

    # Prepare head mask if needed
    # 1.0 in head_mask indicate we keep the head
    # attention_probs has shape bsz x n_heads x N x N
    # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads]
    # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length]
    head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)

    embedding_output = self.embeddings(
        input_ids=input_ids,
        position_ids=position_ids,
        token_type_ids=token_type_ids,
        inputs_embeds=inputs_embeds,
        past_key_values_length=past_key_values_length,
    )

    encoder_outputs = self.encoder(
        embedding_output,
        attention_mask=extended_attention_mask,
        head_mask=head_mask,
        encoder_hidden_states=encoder_hidden_states,
        encoder_attention_mask=encoder_extended_attention_mask,
        past_key_values=past_key_values,
        use_cache=use_cache,
        output_attentions=output_attentions,
        output_hidden_states=output_hidden_states,
        band_mask=band_mask,
        from_mask=from_mask,
        to_mask=to_mask,
        blocked_encoder_mask=blocked_encoder_mask,
        return_dict=return_dict,
    )
    sequence_output = encoder_outputs[0]

    pooler_output = self.activation(self.pooler(sequence_output[:, 0, :])) if (self.pooler is not None) else None

    # undo padding
    if padding_len > 0:
        # unpad `sequence_output` because the calling function is expecting a length == input_ids.shape[1]
        sequence_output = sequence_output[:, :-padding_len]

    if not return_dict:
        return (sequence_output, pooler_output) + encoder_outputs[1:]

    return BaseModelOutputWithPoolingAndCrossAttentions(
        last_hidden_state=sequence_output,
        pooler_output=pooler_output,
        past_key_values=encoder_outputs.past_key_values,
        hidden_states=encoder_outputs.hidden_states,
        attentions=encoder_outputs.attentions,
        cross_attentions=encoder_outputs.cross_attentions,
    )
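The attention-type fallback and block padding in the forward pass above are pure arithmetic on the config values. The following standalone sketch (plain Python; the example `block_size` and `num_random_blocks` match the `BigBirdConfig` defaults) reproduces both decisions:

```python
def block_sparse_plan(seq_length: int, block_size: int, num_random_blocks: int):
    """Mirror the checks in BigBirdModel.forward: decide whether block-sparse
    attention is usable and how much padding a sequence needs."""
    # Global (2 blocks) + sliding (3 blocks) + random (2 * num_random_blocks) tokens
    max_tokens_to_attend = (5 + 2 * num_random_blocks) * block_size
    if seq_length <= max_tokens_to_attend:
        return "original_full", 0  # too short for block_sparse: fall back, no padding
    # Block-sparse attention needs seq_length to be a multiple of block_size
    padding_len = (block_size - seq_length % block_size) % block_size
    return "block_sparse", padding_len

# With block_size=64 and num_random_blocks=3, sequences of 704 tokens or fewer
# fall back to full attention.
print(block_sparse_plan(512, 64, 3))   # ('original_full', 0)
print(block_sparse_plan(1000, 64, 3))  # ('block_sparse', 24)
```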

mindnlp.transformers.models.big_bird.modeling_big_bird.BigBirdModel.get_input_embeddings()

Method to retrieve the input embeddings from the BigBirdModel.

PARAMETER DESCRIPTION
self

An instance of the BigBirdModel class.

RETURNS DESCRIPTION
word_embeddings

The method returns the word embeddings stored in the BigBirdModel instance.

Source code in mindnlp/transformers/models/big_bird/modeling_big_bird.py
def get_input_embeddings(self):
    """
    Method to retrieve the input embeddings from the BigBirdModel.

    Args:
        self: An instance of the BigBirdModel class.

    Returns:
        word_embeddings: The method returns the word embeddings stored in the BigBirdModel instance.

    Raises:
        None.
    """
    return self.embeddings.word_embeddings

mindnlp.transformers.models.big_bird.modeling_big_bird.BigBirdModel.set_attention_type(value)

Method to set the attention type for the BigBirdModel.

PARAMETER DESCRIPTION
self

Instance of the BigBirdModel class.

value

The specified attention type to set. It can only be either 'original_full' or 'block_sparse'.

TYPE: str

RETURNS DESCRIPTION

None.

RAISES DESCRIPTION
ValueError

If the value provided is not 'original_full' or 'block_sparse'.

Source code in mindnlp/transformers/models/big_bird/modeling_big_bird.py
def set_attention_type(self, value: str):
    """
    Method to set the attention type for the BigBirdModel.

    Args:
        self: Instance of the BigBirdModel class.
        value (str): The specified attention type to set. It can only be either 'original_full' or 'block_sparse'.

    Returns:
        None.

    Raises:
        ValueError: If the value provided is not 'original_full' or 'block_sparse'.
    """
    if value not in ["original_full", "block_sparse"]:
        raise ValueError(
            f"attention_type can only be set to either 'original_full' or 'block_sparse', but is {value}"
        )
    # attention type is already correctly set
    if value == self.attention_type:
        return
    self.attention_type = value
    self.encoder.set_attention_type(value)
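The guard above follows a validate / no-op-if-unchanged / propagate pattern. A minimal stand-in (a hypothetical toy class, not the real model) exercises the same logic:

```python
class AttentionSwitch:
    """Toy stand-in for the set_attention_type guard: validate the value,
    return early if unchanged, otherwise record the change."""
    def __init__(self, attention_type="block_sparse"):
        self.attention_type = attention_type
        self.propagations = 0  # counts how often the encoder would be updated

    def set_attention_type(self, value: str):
        if value not in ["original_full", "block_sparse"]:
            raise ValueError(
                f"attention_type can only be set to either 'original_full' or "
                f"'block_sparse', but is {value}"
            )
        if value == self.attention_type:
            return  # already set: skip propagating to the encoder
        self.attention_type = value
        self.propagations += 1

switch = AttentionSwitch()
switch.set_attention_type("block_sparse")   # no-op: already block_sparse
switch.set_attention_type("original_full")  # actually switches
print(switch.attention_type, switch.propagations)  # original_full 1
```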

mindnlp.transformers.models.big_bird.modeling_big_bird.BigBirdModel.set_input_embeddings(value)

Set the input embeddings of the BigBirdModel.

PARAMETER DESCRIPTION
self

An instance of the BigBirdModel class.

TYPE: BigBirdModel

value

The new input embeddings to be set. This should be an instance of a compatible embedding object.

RETURNS DESCRIPTION

None.

This method is used to update the input embeddings of the BigBirdModel with a new set of embeddings. It takes in the instance of the BigBirdModel class and the new embeddings to be set as parameters. The 'value' parameter should be an instance of a compatible embedding object.

Note that changing the input embeddings can have a significant impact on the model's performance, so it should be done carefully and with consideration of the specific task and data being used.

This method does not return any value, as it directly modifies the input embeddings of the BigBirdModel instance.

Example
>>> model = BigBirdModel()
>>> embeddings = WordEmbeddings()
>>> model.set_input_embeddings(embeddings)
Source code in mindnlp/transformers/models/big_bird/modeling_big_bird.py
def set_input_embeddings(self, value):
    """
    Set the input embeddings of the BigBirdModel.

    Args:
        self (BigBirdModel): An instance of the BigBirdModel class.
        value: The new input embeddings to be set. This should be an instance of a compatible embedding object.

    Returns:
        None.

    Raises:
        None.

    This method is used to update the input embeddings of the BigBirdModel with a new set of embeddings.
    It takes in the instance of the BigBirdModel class and the new embeddings to be set as parameters.
    The 'value' parameter should be an instance of a compatible embedding object.

    Note that changing the input embeddings can have a significant impact on the model's performance,
    so it should be done carefully and with consideration of the specific task and data being used.

    This method does not return any value, as it directly modifies the input embeddings of the BigBirdModel instance.

    Example:
        ```python
        >>> model = BigBirdModel()
        >>> embeddings = WordEmbeddings()
        >>> model.set_input_embeddings(embeddings)
        ```
    """
    self.embeddings.word_embeddings = value

mindnlp.transformers.models.big_bird.modeling_big_bird.BigBirdPreTrainedModel

Bases: PreTrainedModel

An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models.

Source code in mindnlp/transformers/models/big_bird/modeling_big_bird.py
class BigBirdPreTrainedModel(PreTrainedModel):
    """
    An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained
    models.
    """
    config_class = BigBirdConfig
    base_model_prefix = "bert"
    supports_gradient_checkpointing = True

    def _init_weights(self, cell):
        """Initialize the weights"""
        if isinstance(cell, nn.Linear):
            # Slightly different from the TF version which uses truncated_normal for initialization
            # cf https://github.com/pytorch/pytorch/pull/5617
            cell.weight.set_data(initializer(Normal(self.config.initializer_range),
                                                    cell.weight.shape, cell.weight.dtype))
            if cell.bias is not None:
                cell.bias.set_data(initializer('zeros', cell.bias.shape, cell.bias.dtype))
        elif isinstance(cell, nn.Embedding):
            weight = np.random.normal(0.0, self.config.initializer_range, cell.weight.shape)
            if cell.padding_idx is not None:
                weight[cell.padding_idx] = 0

            cell.weight.set_data(Tensor(weight, cell.weight.dtype))
        elif isinstance(cell, nn.LayerNorm):
            cell.weight.set_data(initializer('ones', cell.weight.shape, cell.weight.dtype))
            cell.bias.set_data(initializer('zeros', cell.bias.shape, cell.bias.dtype))

mindnlp.transformers.models.big_bird.tokenization_big_bird.BigBirdTokenizer

Bases: PreTrainedTokenizer

Construct a BigBird tokenizer. Based on SentencePiece.

This tokenizer inherits from [PreTrainedTokenizer] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.

PARAMETER DESCRIPTION
vocab_file

SentencePiece file (generally has a .spm extension) that contains the vocabulary necessary to instantiate a tokenizer.

TYPE: `str`

unk_token

The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.

TYPE: `str`, *optional*, defaults to `"<unk>"` DEFAULT: '<unk>'

bos_token

The beginning-of-sequence token.

TYPE: `str`, *optional*, defaults to `"<s>"` DEFAULT: '<s>'

eos_token

The end-of-sequence token.

TYPE: `str`, *optional*, defaults to `"</s>"` DEFAULT: '</s>'

pad_token

The token used for padding, for example when batching sequences of different lengths.

TYPE: `str`, *optional*, defaults to `"<pad>"` DEFAULT: '<pad>'

sep_token

The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.

TYPE: `str`, *optional*, defaults to `"[SEP]"` DEFAULT: '[SEP]'

mask_token

The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.

TYPE: `str`, *optional*, defaults to `"[MASK]"` DEFAULT: '[MASK]'

cls_token

The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.

TYPE: `str`, *optional*, defaults to `"[CLS]"` DEFAULT: '[CLS]'

sp_model_kwargs

Will be passed to the SentencePieceProcessor.__init__() method. The Python wrapper for SentencePiece can be used, among other things, to set:

  • enable_sampling: Enable subword regularization.
  • nbest_size: Sampling parameters for unigram. Invalid for BPE-Dropout.

    • nbest_size = {0,1}: No sampling is performed.
    • nbest_size > 1: samples from the nbest_size results.
    • nbest_size < 0: assumes that nbest_size is infinite and samples from all hypotheses (lattice) using the forward-filtering-and-backward-sampling algorithm.
    • alpha: Smoothing parameter for unigram sampling, and dropout probability of merge operations for BPE-dropout.

TYPE: `dict`, *optional* DEFAULT: None

Source code in mindnlp/transformers/models/big_bird/tokenization_big_bird.py
class BigBirdTokenizer(PreTrainedTokenizer):
    """
    Construct a BigBird tokenizer. Based on [SentencePiece](https://github.com/google/sentencepiece).

    This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to
    this superclass for more information regarding those methods.

    Args:
        vocab_file (`str`):
            [SentencePiece](https://github.com/google/sentencepiece) file (generally has a *.spm* extension) that
            contains the vocabulary necessary to instantiate a tokenizer.
        unk_token (`str`, *optional*, defaults to `"<unk>"`):
            The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
            token instead.
        bos_token (`str`, *optional*, defaults to `"<s>"`):
            The beginning-of-sequence token.
        eos_token (`str`, *optional*, defaults to `"</s>"`):
            The end-of-sequence token.
        pad_token (`str`, *optional*, defaults to `"<pad>"`):
            The token used for padding, for example when batching sequences of different lengths.
        sep_token (`str`, *optional*, defaults to `"[SEP]"`):
            The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
            sequence classification or for a text and a question for question answering. It is also used as the last
            token of a sequence built with special tokens.
        mask_token (`str`, *optional*, defaults to `"[MASK]"`):
            The token used for masking values. This is the token used when training this model with masked language
            modeling. This is the token which the model will try to predict.
        cls_token (`str`, *optional*, defaults to `"[CLS]"`):
            The classifier token which is used when doing sequence classification (classification of the whole sequence
            instead of per-token classification). It is the first token of the sequence when built with special tokens.
        sp_model_kwargs (`dict`, *optional*):
            Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for
            SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other things,
            to set:

            - `enable_sampling`: Enable subword regularization.
            - `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout.

                - `nbest_size = {0,1}`: No sampling is performed.
                - `nbest_size > 1`: samples from the nbest_size results.
                - `nbest_size < 0`: assumes that nbest_size is infinite and samples from all hypotheses (lattice)
                using the forward-filtering-and-backward-sampling algorithm.
            - `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
            BPE-dropout.
    """
    vocab_files_names = VOCAB_FILES_NAMES
    pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP
    max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES
    model_input_names = ["input_ids", "attention_mask"]
    prefix_tokens: List[int] = []

    def __init__(
        self,
        vocab_file,
        unk_token="<unk>",
        bos_token="<s>",
        eos_token="</s>",
        pad_token="<pad>",
        sep_token="[SEP]",
        mask_token="[MASK]",
        cls_token="[CLS]",
        sp_model_kwargs: Optional[Dict[str, Any]] = None,
        **kwargs,
    ) -> None:
        """
        Initializes an instance of the BigBirdTokenizer class.

        Args:
            self: The instance of the BigBirdTokenizer class.
            vocab_file (str): Path to the vocabulary file.
            unk_token (str, optional): The token representing unknown words. Defaults to '<unk>'.
            bos_token (str, optional): The token representing the beginning of a sentence. Defaults to '<s>'.
            eos_token (str, optional): The token representing the end of a sentence. Defaults to '</s>'.
            pad_token (str, optional): The token representing padding. Defaults to '<pad>'.
            sep_token (str, optional): The token representing sentence separation. Defaults to '[SEP]'.
            mask_token (str, optional): The token representing masked words. Defaults to '[MASK]'.
            cls_token (str, optional): The token representing classification. Defaults to '[CLS]'.
            sp_model_kwargs (Optional[Dict[str, Any]], optional): Additional arguments for the SentencePieceProcessor. Defaults to None.
            **kwargs: Additional keyword arguments.

        Returns:
            None.

        Raises:
            None.
        """
        bos_token = (
            AddedToken(bos_token, lstrip=False, rstrip=False)
            if isinstance(bos_token, str)
            else bos_token
        )
        eos_token = (
            AddedToken(eos_token, lstrip=False, rstrip=False)
            if isinstance(eos_token, str)
            else eos_token
        )
        unk_token = (
            AddedToken(unk_token, lstrip=False, rstrip=False)
            if isinstance(unk_token, str)
            else unk_token
        )
        pad_token = (
            AddedToken(pad_token, lstrip=False, rstrip=False)
            if isinstance(pad_token, str)
            else pad_token
        )
        cls_token = (
            AddedToken(cls_token, lstrip=False, rstrip=False)
            if isinstance(cls_token, str)
            else cls_token
        )
        sep_token = (
            AddedToken(sep_token, lstrip=False, rstrip=False)
            if isinstance(sep_token, str)
            else sep_token
        )

        # Mask token behave like a normal word, i.e. include the space before it
        mask_token = (
            AddedToken(mask_token, lstrip=True, rstrip=False)
            if isinstance(mask_token, str)
            else mask_token
        )

        self.sp_model_kwargs = {} if sp_model_kwargs is None else sp_model_kwargs

        self.vocab_file = vocab_file

        self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs)
        self.sp_model.Load(vocab_file)

        super().__init__(
            bos_token=bos_token,
            eos_token=eos_token,
            unk_token=unk_token,
            pad_token=pad_token,
            sep_token=sep_token,
            mask_token=mask_token,
            cls_token=cls_token,
            sp_model_kwargs=self.sp_model_kwargs,
            **kwargs,
        )

    @property
    def vocab_size(self):
        """
        Method to retrieve the vocabulary size of the BigBirdTokenizer.

        Args:
            self (BigBirdTokenizer): The instance of the BigBirdTokenizer class.
                This parameter is required to access the tokenizer's properties.

        Returns:
            int: The vocabulary size of the tokenizer.

        Raises:
            None.
        """
        return self.sp_model.get_piece_size()

    def get_vocab(self):
        """
        This method returns the vocabulary for the BigBirdTokenizer.

        Args:
            self (BigBirdTokenizer): The instance of the BigBirdTokenizer class.

        Returns:
            dict: A dictionary containing the vocabulary, where keys are tokens and values are their corresponding ids.

        Raises:
            None
        """
        vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)}
        vocab.update(self.added_tokens_encoder)
        return vocab

    def __getstate__(self):
        """
        The '__getstate__' method in the 'BigBirdTokenizer' class is used to retrieve the current state of the object
        for serialization. This method takes one parameter, 'self', which refers to the instance of
        the 'BigBirdTokenizer' class.

        Args:
            self (BigBirdTokenizer): The instance of the 'BigBirdTokenizer' class.

        Returns:
            dict: A copy of the instance's `__dict__` with the unpicklable `sp_model` entry set to `None`.

        Raises:
            None.
        """
        state = self.__dict__.copy()
        state["sp_model"] = None
        return state

    def __setstate__(self, d):
        """
        Sets the state of the BigBirdTokenizer object based on the provided dictionary.

        Args:
            self (BigBirdTokenizer): The instance of the BigBirdTokenizer class.
            d (dict): The dictionary containing the state information.

        Returns:
            None

        Raises:
            None

        This method sets the state of the BigBirdTokenizer object by assigning the dictionary 'd' to the '__dict__' attribute of the instance.
        If the instance does not have the 'sp_model_kwargs' attribute, it is initialized as an empty dictionary.
        The SentencePieceProcessor object 'sp_model' is then created and assigned to the 'sp_model' attribute of the instance.
        The 'sp_model_kwargs' dictionary is used to pass any additional keyword arguments to the SentencePieceProcessor initialization.
        Finally, the vocabulary file is loaded using the 'Load' method of the 'sp_model' object.
        """
        self.__dict__ = d

        # for backward compatibility
        if not hasattr(self, "sp_model_kwargs"):
            self.sp_model_kwargs = {}

        self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs)
        self.sp_model.Load(self.vocab_file)
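The `__getstate__`/`__setstate__` pair implements the standard recipe for pickling objects that hold an unpicklable member: drop it from the state, then rebuild it on restore. A toy class (hypothetical names, not part of mindnlp) shows the same pattern:

```python
import pickle

class Holder:
    """Toy object with an 'unpicklable' member rebuilt on unpickling,
    mirroring BigBirdTokenizer.__getstate__/__setstate__."""
    def __init__(self, path):
        self.path = path
        self.resource = self._load(path)  # imagine this cannot be pickled

    def _load(self, path):
        return f"resource:{path}"

    def __getstate__(self):
        state = self.__dict__.copy()
        state["resource"] = None          # drop the unpicklable member
        return state

    def __setstate__(self, d):
        self.__dict__ = d
        self.resource = self._load(self.path)  # rebuild it from saved config

restored = pickle.loads(pickle.dumps(Holder("spiece.model")))
print(restored.resource)  # resource:spiece.model
```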

    def _tokenize(self, text: str) -> List[str]:
        """Take as input a string and return a list of strings (tokens) for words/sub-words"""
        return self.sp_model.encode(text, out_type=str)

    def _convert_token_to_id(self, token):
        """Converts a token (str) in an id using the vocab."""
        return self.sp_model.piece_to_id(token)

    def _convert_id_to_token(self, index):
        """Converts an index (integer) in a token (str) using the vocab."""
        token = self.sp_model.IdToPiece(index)
        return token

    # Copied from transformers.models.albert.tokenization_albert.AlbertTokenizer.convert_tokens_to_string
    def convert_tokens_to_string(self, tokens):
        """Converts a sequence of tokens (string) in a single string."""
        current_sub_tokens = []
        out_string = ""
        prev_is_special = False
        for token in tokens:
            # make sure that special tokens are not decoded using sentencepiece model
            if token in self.all_special_tokens:
                if not prev_is_special:
                    out_string += " "
                out_string += self.sp_model.decode(current_sub_tokens) + token
                prev_is_special = True
                current_sub_tokens = []
            else:
                current_sub_tokens.append(token)
                prev_is_special = False
        out_string += self.sp_model.decode(current_sub_tokens)
        return out_string.strip()
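The grouping logic above can be followed with a stub decoder. Here `stub_decode` simply strips the SentencePiece word-boundary marker `▁` (an assumption for illustration; the real call goes through the SentencePiece model):

```python
SPECIAL = {"[CLS]", "[SEP]", "[MASK]"}

def stub_decode(pieces):
    # Stand-in for sp_model.decode: join pieces, turning "▁" markers into spaces.
    return "".join(pieces).replace("\u2581", " ").strip()

def tokens_to_string(tokens):
    current, out, prev_is_special = [], "", False
    for token in tokens:
        if token in SPECIAL:
            if not prev_is_special:
                out += " "
            out += stub_decode(current) + token
            prev_is_special = True
            current = []
        else:
            current.append(token)
            prev_is_special = False
    out += stub_decode(current)
    return out.strip()

print(tokens_to_string(["\u2581Hello", "\u2581world", "[SEP]"]))  # Hello world[SEP]
```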

    def _decode(
        self,
        token_ids: List[int],
        skip_special_tokens: bool = False,
        clean_up_tokenization_spaces: Optional[bool] = None,
        spaces_between_special_tokens: bool = True,
        **kwargs,
    ) -> str:
        """
        Decode the token IDs into a human-readable string.

        Args:
            self: The BigBirdTokenizer instance.
            token_ids (List[int]): A list of token IDs to be decoded into a string.
            skip_special_tokens (bool, optional): Whether to skip special tokens during decoding. Defaults to False.
            clean_up_tokenization_spaces (bool, optional): Whether to clean up tokenization spaces in the decoded text.
                Defaults to None.
            spaces_between_special_tokens (bool, optional):
                Whether to include spaces between special tokens in the decoded text. Defaults to True.

        Returns:
            str: The decoded string representation of the input token IDs.

        Raises:
            None.
            """
        self._decode_use_source_tokenizer = kwargs.pop("use_source_tokenizer", False)

        filtered_tokens = self.convert_ids_to_tokens(
            token_ids, skip_special_tokens=skip_special_tokens
        )

        # To avoid mixing byte-level and unicode for byte-level BPE
        # we need to build string separately for added tokens and byte-level tokens
        # cf. https://github.com/huggingface/transformers/issues/1133
        sub_texts = []
        current_sub_text = []
        for token in filtered_tokens:
            if skip_special_tokens and token in self.all_special_ids:
                continue
            if token in self.added_tokens_encoder:
                if current_sub_text:
                    sub_texts.append(self.convert_tokens_to_string(current_sub_text))
                    current_sub_text = []
                sub_texts.append(token)
            else:
                current_sub_text.append(token)
        if current_sub_text:
            sub_texts.append(self.convert_tokens_to_string(current_sub_text))

        # Mimic the behavior of the Rust tokenizer:
        # No space before [MASK] and [SEP]
        if spaces_between_special_tokens:
            text = re.sub(r" (\[(MASK|SEP)\])", r"\1", " ".join(sub_texts))
        else:
            text = "".join(sub_texts)

        clean_up_tokenization_spaces = (
            clean_up_tokenization_spaces
            if clean_up_tokenization_spaces is not None
            else self.clean_up_tokenization_spaces
        )
        if clean_up_tokenization_spaces:
            clean_text = self.clean_up_tokenization(text)
            return clean_text
        return text
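The space-stripping behaviour around `[MASK]` and `[SEP]` comes down to a single substitution, which can be checked on its own:

```python
import re

def strip_space_before_specials(text: str) -> str:
    # Remove the space that " ".join introduced before [MASK] and [SEP],
    # mimicking the Rust tokenizer's output.
    return re.sub(r" (\[(MASK|SEP)\])", r"\1", text)

print(strip_space_before_specials("Hello [MASK] world [SEP]"))
# Hello[MASK] world[SEP]
```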

    def save_vocabulary(
        self, save_directory: str, filename_prefix: Optional[str] = None
    ) -> Tuple[str]:
        '''
        Save the vocabulary to a specified directory with an optional filename prefix.

        Args:
            self (BigBirdTokenizer): The instance of the BigBirdTokenizer class.
            save_directory (str): The directory where the vocabulary will be saved.
            filename_prefix (Optional[str]): An optional prefix to be added to the filename of the vocabulary. Defaults to None.

        Returns:
            Tuple[str]: A tuple containing the path to the saved vocabulary file.

        Raises:
            OSError: If the save_directory is not a valid directory.
            IOError: If the vocabulary file cannot be copied or written to the specified location.
        '''
        if not os.path.isdir(save_directory):
            logger.error(f"Vocabulary path ({save_directory}) should be a directory")
            return
        out_vocab_file = os.path.join(
            save_directory,
            (filename_prefix + "-" if filename_prefix else "")
            + VOCAB_FILES_NAMES["vocab_file"],
        )

        if os.path.abspath(self.vocab_file) != os.path.abspath(
            out_vocab_file
        ) and os.path.isfile(self.vocab_file):
            copyfile(self.vocab_file, out_vocab_file)
        elif not os.path.isfile(self.vocab_file):
            with open(out_vocab_file, "wb") as fi:
                content_spiece_model = self.sp_model.serialized_model_proto()
                fi.write(content_spiece_model)

        return (out_vocab_file,)

    def build_inputs_with_special_tokens(
        self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
    ) -> List[int]:
        """
        Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
        adding special tokens. A Big Bird sequence has the following format:

        - single sequence: `[CLS] X [SEP]`
        - pair of sequences: `[CLS] A [SEP] B [SEP]`

        Args:
            token_ids_0 (`List[int]`):
                List of IDs to which the special tokens will be added.
            token_ids_1 (`List[int]`, *optional*):
                Optional second list of IDs for sequence pairs.

        Returns:
            `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens.
        """
        if token_ids_1 is None:
            return [self.cls_token_id] + token_ids_0 + [self.sep_token_id]
        cls = [self.cls_token_id]
        sep = [self.sep_token_id]
        return cls + token_ids_0 + sep + token_ids_1 + sep
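The concatenation rule above is easy to verify with stand-in token IDs (the IDs below are illustrative, not the real BigBird vocabulary values):

```python
CLS_ID, SEP_ID = 65, 66  # illustrative IDs, not the real vocabulary values

def build_inputs_with_special_tokens(ids_0, ids_1=None):
    """[CLS] X [SEP] for one sequence, [CLS] A [SEP] B [SEP] for a pair."""
    if ids_1 is None:
        return [CLS_ID] + ids_0 + [SEP_ID]
    return [CLS_ID] + ids_0 + [SEP_ID] + ids_1 + [SEP_ID]

print(build_inputs_with_special_tokens([10, 11]))        # [65, 10, 11, 66]
print(build_inputs_with_special_tokens([10, 11], [20]))  # [65, 10, 11, 66, 20, 66]
```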

    def get_special_tokens_mask(
        self,
        token_ids_0: List[int],
        token_ids_1: Optional[List[int]] = None,
        already_has_special_tokens: bool = False,
    ) -> List[int]:
        """
        Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
        special tokens using the tokenizer `prepare_for_model` method.

        Args:
            token_ids_0 (`List[int]`):
                List of IDs.
            token_ids_1 (`List[int]`, *optional*):
                Optional second list of IDs for sequence pairs.
            already_has_special_tokens (`bool`, *optional*, defaults to `False`):
                Whether or not the token list is already formatted with special tokens for the model.

        Returns:
            `List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
        """
        if already_has_special_tokens:
            return super().get_special_tokens_mask(
                token_ids_0=token_ids_0,
                token_ids_1=token_ids_1,
                already_has_special_tokens=True,
            )

        if token_ids_1 is None:
            return [1] + ([0] * len(token_ids_0)) + [1]
        return [1] + ([0] * len(token_ids_0)) + [1] + ([0] * len(token_ids_1)) + [1]

    def create_token_type_ids_from_sequences(
        self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
    ) -> List[int]:
        """
        Create a mask from the two sequences passed to be used in a sequence-pair classification task. A BERT sequence
        pair mask has the following format:
        ```
        0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
        | first sequence    | second sequence |
        ```

        If `token_ids_1` is `None`, this method only returns the first portion of the mask (0s).

        Args:
            token_ids_0 (`List[int]`):
                List of IDs.
            token_ids_1 (`List[int]`, *optional*):
                Optional second list of IDs for sequence pairs.

        Returns:
            `List[int]`: List of [token type IDs](../glossary#token-type-ids) according to the given sequence(s).
        """
        sep = [self.sep_token_id]
        cls = [self.cls_token_id]
        if token_ids_1 is None:
            return len(cls + token_ids_0 + sep) * [0]
        return len(cls + token_ids_0 + sep) * [0] + len(token_ids_1 + sep) * [1]
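The three methods above always produce mutually consistent outputs: the input IDs, the special-tokens mask, and the token type IDs for a pair of sequences all have the same length and follow the `[CLS] A [SEP] B [SEP]` layout. The following standalone sketch re-implements that logic with placeholder special-token IDs (`65`/`66` here are illustrative; the real values come from the loaded SentencePiece vocabulary):

```python
# Illustration of the [CLS] A [SEP] B [SEP] layout produced by
# build_inputs_with_special_tokens, get_special_tokens_mask, and
# create_token_type_ids_from_sequences. CLS/SEP IDs are placeholders.
CLS, SEP = 65, 66

def build_inputs(ids_0, ids_1=None):
    if ids_1 is None:
        return [CLS] + ids_0 + [SEP]
    return [CLS] + ids_0 + [SEP] + ids_1 + [SEP]

def special_tokens_mask(ids_0, ids_1=None):
    # 1 marks a special token, 0 marks a sequence token
    if ids_1 is None:
        return [1] + [0] * len(ids_0) + [1]
    return [1] + [0] * len(ids_0) + [1] + [0] * len(ids_1) + [1]

def token_type_ids(ids_0, ids_1=None):
    # segment 0 covers [CLS] A [SEP]; segment 1 covers B [SEP]
    if ids_1 is None:
        return (len(ids_0) + 2) * [0]
    return (len(ids_0) + 2) * [0] + (len(ids_1) + 1) * [1]

pair = build_inputs([10, 11], [20])
print(pair)                                  # [65, 10, 11, 66, 20, 66]
print(special_tokens_mask([10, 11], [20]))   # [1, 0, 0, 1, 0, 1]
print(token_type_ids([10, 11], [20]))        # [0, 0, 0, 0, 1, 1]
```

Note that all three lists share the same length, which is what lets them be zipped together when the model consumes them.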

mindnlp.transformers.models.big_bird.tokenization_big_bird.BigBirdTokenizer.vocab_size property

Returns the vocabulary size of the BigBirdTokenizer.

PARAMETER DESCRIPTION
self

The instance of the BigBirdTokenizer class. This parameter is required to access the tokenizer's properties.

TYPE: BigBirdTokenizer

RETURNS DESCRIPTION
int

The vocabulary size as an integer.

mindnlp.transformers.models.big_bird.tokenization_big_bird.BigBirdTokenizer.__getstate__()

The `__getstate__` method in the `BigBirdTokenizer` class returns the object's state for serialization: a copy of the instance's `__dict__` with the SentencePiece processor removed, since that object cannot be pickled directly.

PARAMETER DESCRIPTION
self

The instance of the 'BigBirdTokenizer' class.

TYPE: BigBirdTokenizer

RETURNS DESCRIPTION
dict

A copy of the instance's `__dict__` with the SentencePiece processor removed, suitable for pickling.
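Dropping the SentencePiece processor from the pickled state is a common pattern for SentencePiece-backed tokenizers: `__getstate__` removes the live processor and `__setstate__` rebuilds it from the saved vocabulary file. The sketch below illustrates the pattern with a stand-in class; the names and details are illustrative, not the exact mindnlp implementation:

```python
# Sketch of the __getstate__/__setstate__ pattern for a tokenizer whose
# SentencePiece processor is not picklable. SpLike stands in for the
# real sentencepiece.SentencePieceProcessor.
import pickle


class SpLike:
    """Stand-in for a non-picklable SentencePiece processor."""


class TokenizerSketch:
    def __init__(self, vocab_file):
        self.vocab_file = vocab_file
        self.sp_model = SpLike()

    def __getstate__(self):
        state = self.__dict__.copy()
        state["sp_model"] = None      # drop the live processor
        return state

    def __setstate__(self, state):
        self.__dict__ = state
        # In the real tokenizer this would reload from self.vocab_file.
        self.sp_model = SpLike()


tok = TokenizerSketch("spiece.model")
clone = pickle.loads(pickle.dumps(tok))   # exercises both dunder methods
print(clone.vocab_file)                   # spiece.model
```

The pickled payload stays small and portable because it carries only the vocabulary file path, not the processor's in-memory state.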

Source code in mindnlp/transformers/models/big_bird/tokenization_big_bird.py