mamba
mindnlp.transformers.models.mamba.modeling_mamba
MindSpore MAMBA model.
mindnlp.transformers.models.mamba.modeling_mamba.MambaBlock
Bases: Module

The MambaBlock class represents a block used in the Mamba neural network model for processing hidden states. It inherits from nn.Module and provides methods for initializing the block and running its forward computation.
ATTRIBUTE | DESCRIPTION |
---|---|
config | A dictionary containing configuration parameters for the block. |
layer_idx | An integer representing the index of the layer within the neural network. |
residual_in_fp32 | A boolean indicating whether the residual input is in float32 format. |
norm | An instance of MambaRMSNorm for performing layer normalization on hidden states. |
mixer | An instance of MambaMixer for mixing the normalized hidden states. |
METHOD | DESCRIPTION |
---|---|
__init__ | Initializes the MambaBlock instance with the provided configuration and layer index. |
forward | Constructs the block by processing hidden states through normalization, mixing, and addition of residuals. |
Example
>>> # Example of initializing and using the MambaBlock class
>>> config = MambaConfig(hidden_size=512, layer_norm_epsilon=1e-5, residual_in_fp32=True)
>>> block = MambaBlock(config, layer_idx=1)
>>> output = block(hidden_states)
Source code in mindnlp/transformers/models/mamba/modeling_mamba.py, lines 307-382.
mindnlp.transformers.models.mamba.modeling_mamba.MambaBlock.__init__(config, layer_idx)
Initializes a MambaBlock instance.

PARAMETER | DESCRIPTION |
---|---|
self | The MambaBlock instance itself. |
config | An object containing configuration settings for the block. |
layer_idx | Index of the layer within the block. |

RETURNS | DESCRIPTION |
---|---|
None | |

Source code in mindnlp/transformers/models/mamba/modeling_mamba.py, lines 334-355.
mindnlp.transformers.models.mamba.modeling_mamba.MambaBlock.forward(hidden_states, cache_params=None)
Runs the forward computation of the MambaBlock on the input hidden_states.

PARAMETER | DESCRIPTION |
---|---|
self | The instance of the MambaBlock class. |
hidden_states | A tensor representing the input hidden states. It is the main input to the forward method. |
cache_params | Optional. A MambaCache instance holding cached convolution and SSM states. Default is None. |

RETURNS | DESCRIPTION |
---|---|
Tensor | The processed hidden states after normalization, mixing, and residual addition. |

RAISES | DESCRIPTION |
---|---|
TypeError | If the hidden_states parameter is not a valid tensor. |
ValueError | If cache_params is provided but is not a valid cache object. |
RuntimeError | If there is a runtime error during the execution of the method. |

Source code in mindnlp/transformers/models/mamba/modeling_mamba.py, lines 357-382.
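The following is a minimal sketch of the forward flow described above (residual copy, optional float32 cast, RMS normalization, mixing, residual addition). It assumes the attribute names listed for MambaBlock and is illustrative rather than the actual source.

>>> import mindspore
...
>>> def mamba_block_forward(block, hidden_states, cache_params=None):
...     # Illustrative only: keep the residual branch, optionally in float32.
...     residual = hidden_states
...     if block.residual_in_fp32:
...         residual = residual.astype(mindspore.float32)
...     # Normalize, mix, then add the residual back.
...     hidden_states = block.norm(hidden_states)
...     hidden_states = block.mixer(hidden_states, cache_params=cache_params)
...     return residual + hidden_states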
mindnlp.transformers.models.mamba.modeling_mamba.MambaCache
MambaCache represents a cache for storing intermediate states during execution of the Mamba model.

ATTRIBUTE | DESCRIPTION |
---|---|
seqlen_offset | The offset for sequence length. |
dtype | The data type used for the cache. |
conv_states | A dictionary storing convolutional states for each hidden layer. |
ssm_states | A dictionary storing SSM (state space model) states for each hidden layer. |

PARAMETER | DESCRIPTION |
---|---|
config | The configuration for the Mamba model. |
batch_size | The size of the input batch. |
dtype | The data type to be used, default is mindspore.float16. |

Source code in mindnlp/transformers/models/mamba/modeling_mamba.py, lines 204-252.
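As an illustration of how the cached states are laid out, the sketch below builds zero tensors with per-layer shapes, assuming the same layout as the graph implementation documented later on this page, i.e. (batch_size, intermediate_size, conv_kernel_size) and (batch_size, intermediate_size, ssm_state_size); the concrete sizes are example values, not defaults taken from the source.

>>> import mindspore
>>> from mindspore import ops
...
>>> batch_size, intermediate_size = 2, 1536
>>> conv_kernel_size, ssm_state_size = 4, 16
>>> conv_state = ops.zeros((batch_size, intermediate_size, conv_kernel_size), mindspore.float16)
>>> ssm_state = ops.zeros((batch_size, intermediate_size, ssm_state_size), mindspore.float16)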
mindnlp.transformers.models.mamba.modeling_mamba.MambaCache.__init__(config, batch_size, dtype=mindspore.float16)
Initialize the MambaCache class.

PARAMETER | DESCRIPTION |
---|---|
self | The instance of the class. |
config | Configuration object containing model parameters. |
batch_size | The size of the input batch. |
dtype | Data type for the tensors (default: mindspore.float16). |

RETURNS | DESCRIPTION |
---|---|
None | |

Source code in mindnlp/transformers/models/mamba/modeling_mamba.py, lines 222-252.
mindnlp.transformers.models.mamba.modeling_mamba.MambaCausalLMOutput
dataclass
Bases: ModelOutput

Base class for causal language model (or autoregressive) outputs.

PARAMETER | DESCRIPTION |
---|---|
loss | Language modeling loss (for next-token prediction). |
logits | Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). |
cache_params | The state of the model at the last time step. Can be used in a forward method with the next input_ids to avoid providing the old input_ids. |

Source code in mindnlp/transformers/models/mamba/modeling_mamba.py, lines 465-488.
mindnlp.transformers.models.mamba.modeling_mamba.MambaForCausalLM
Bases: MambaPreTrainedModel

This class represents a Mamba model for causal language modeling (LM); it is a subclass of MambaPreTrainedModel.

The class includes methods for initializing the model, getting and setting the output embeddings, getting and setting the input embeddings, updating model keyword arguments for generation, preparing inputs for generation, and running the forward pass for causal LM tasks.

The forward method takes input_ids, inputs_embeds, cache_params, labels, output_hidden_states, and return_dict as input parameters and returns the model output for causal LM tasks. It calculates the loss if labels are provided and returns the loss along with the logits and other relevant outputs.

The class also handles cache_params, hidden states, and embedding tensors during the model's execution for LM tasks.

Source code in mindnlp/transformers/models/mamba/modeling_mamba.py, lines 657-872.
mindnlp.transformers.models.mamba.modeling_mamba.MambaForCausalLM.__init__(config)
Initializes the MambaForCausalLM class.

PARAMETER | DESCRIPTION |
---|---|
self | The instance of the MambaForCausalLM class. |
config | An object containing the configuration settings for the model. |

RETURNS | DESCRIPTION |
---|---|
None | |

Source code in mindnlp/transformers/models/mamba/modeling_mamba.py, lines 675-694.
mindnlp.transformers.models.mamba.modeling_mamba.MambaForCausalLM.forward(input_ids=None, inputs_embeds=None, cache_params=None, labels=None, output_hidden_states=None, return_dict=None, **kwargs)
PARAMETER | DESCRIPTION |
---|---|
labels | Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set labels = input_ids. Indices are selected in [-100, 0, ..., config.vocab_size]. All labels set to -100 are ignored (masked); the loss is only computed for labels in [0, ..., config.vocab_size]. |

Source code in mindnlp/transformers/models/mamba/modeling_mamba.py, lines 824-872.
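A hedged usage sketch of the forward call with labels: since labels are shifted inside the model, passing labels equal to input_ids is enough to obtain a next-token-prediction loss. The configuration values below are arbitrary illustrative choices.

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> from mindnlp.transformers.models.mamba.configuration_mamba import MambaConfig
>>> from mindnlp.transformers.models.mamba.modeling_mamba import MambaForCausalLM
...
>>> config = MambaConfig(vocab_size=1000, hidden_size=64, num_hidden_layers=2)
>>> model = MambaForCausalLM(config)
>>> input_ids = Tensor(np.random.randint(0, 1000, (1, 8)), mindspore.int64)
>>> outputs = model(input_ids=input_ids, labels=input_ids, return_dict=True)
>>> loss, logits = outputs.loss, outputs.logits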
mindnlp.transformers.models.mamba.modeling_mamba.MambaForCausalLM.get_input_embeddings()
This method retrieves the input embeddings from the MambaForCausalLM model's backbone.

PARAMETER | DESCRIPTION |
---|---|
self | The MambaForCausalLM instance itself. |

RETURNS | DESCRIPTION |
---|---|
nn.Embedding | The input embedding layer of the backbone model. |

Source code in mindnlp/transformers/models/mamba/modeling_mamba.py, lines 730-744.
mindnlp.transformers.models.mamba.modeling_mamba.MambaForCausalLM.get_output_embeddings()
Returns the output embeddings for the MambaForCausalLM model.

PARAMETER | DESCRIPTION |
---|---|
self | The instance of the MambaForCausalLM class. |

RETURNS | DESCRIPTION |
---|---|
lm_head | The 'lm_head' attribute of the MambaForCausalLM instance, which represents the output embeddings. |

Source code in mindnlp/transformers/models/mamba/modeling_mamba.py, lines 696-711.
mindnlp.transformers.models.mamba.modeling_mamba.MambaForCausalLM.prepare_inputs_for_generation(input_ids, cache_params=None, inputs_embeds=None, **kwargs)
This method prepares inputs for text generation in the MambaForCausalLM class.

PARAMETER | DESCRIPTION |
---|---|
self | The instance of the MambaForCausalLM class. |
input_ids | The input tensor containing token indices of the input sequence. |
cache_params | Cached intermediate states (a MambaCache instance) used to continue generation. |
inputs_embeds | The embeddings of the input tokens, if provided. |

RETURNS | DESCRIPTION |
---|---|
dict | A dictionary containing either 'input_ids' or 'inputs_embeds' based on the conditions specified in the method. Additionally, 'cache_params' is included in the dictionary. |

Source code in mindnlp/transformers/models/mamba/modeling_mamba.py, lines 792-822.
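The selection logic described above can be pictured with the following sketch. It mirrors the common Mamba generation behavior (only the last token is fed once cache_params exist, and inputs_embeds are only used on the first step) and is an assumption about this implementation, not a copy of it.

>>> def prepare_inputs_sketch(input_ids, cache_params=None, inputs_embeds=None):
...     if cache_params is not None:
...         input_ids = input_ids[:, -1:]          # keep only the newest token
...     if inputs_embeds is not None and cache_params is None:
...         model_inputs = {"inputs_embeds": inputs_embeds}
...     else:
...         model_inputs = {"input_ids": input_ids}
...     model_inputs["cache_params"] = cache_params
...     return model_inputs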
mindnlp.transformers.models.mamba.modeling_mamba.MambaForCausalLM.set_input_embeddings(new_embeddings)
Sets the input embeddings for the MambaForCausalLM model.

PARAMETER | DESCRIPTION |
---|---|
self | The instance of the MambaForCausalLM class. |
new_embeddings | The new input embeddings to be set for the model. It can be of any valid type that is compatible with the model's input requirements. |

RETURNS | DESCRIPTION |
---|---|
None | |

RAISES | DESCRIPTION |
---|---|
TypeError | If the new_embeddings parameter is of an incompatible type. |
ValueError | If the new_embeddings parameter does not meet the required criteria for input embeddings. |
RuntimeError | If an unexpected error occurs while setting the input embeddings. |

Source code in mindnlp/transformers/models/mamba/modeling_mamba.py, lines 746-764.
mindnlp.transformers.models.mamba.modeling_mamba.MambaForCausalLM.set_output_embeddings(new_embeddings)
Sets the output embeddings for the MambaForCausalLM model.

PARAMETER | DESCRIPTION |
---|---|
self | An instance of the MambaForCausalLM class. |
new_embeddings | A tensor containing the new output embeddings to be set. |

RETURNS | DESCRIPTION |
---|---|
None | This method updates the output embeddings of the MambaForCausalLM model in place. |

Source code in mindnlp/transformers/models/mamba/modeling_mamba.py, lines 713-728.
mindnlp.transformers.models.mamba.modeling_mamba.MambaMixer
Bases: Module

Compute ∆, A, B, C, and D, the state space parameters, and compute the contextualized_states.

A and D are input-independent (see the Mamba paper [1], Section 3.5.2, "Interpretation of A", for why A isn't selective). ∆, B, and C are input-dependent; this is a key difference between Mamba and the linear time-invariant S4, and is why Mamba is called a selective state space model.
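To make the selectivity concrete, here is a toy NumPy sketch of the discretized recurrence the mixer implements, h_t = exp(Δ_t A) · h_{t-1} + Δ_t B_t · x_t and y_t = C_t · h_t + D · x_t; shapes and names are illustrative, not taken from the source.

>>> import numpy as np
...
>>> def selective_scan(x, delta, A, B, C, D):
...     # x, delta: (seq_len, d); A: (d, n); B, C: (seq_len, n); D: (d,)
...     seq_len, d = x.shape
...     n = A.shape[1]
...     h = np.zeros((d, n))
...     ys = []
...     for t in range(seq_len):
...         dA = np.exp(delta[t][:, None] * A)        # input-dependent discretization of A
...         dB = delta[t][:, None] * B[t][None, :]    # input-dependent B
...         h = dA * h + dB * x[t][:, None]           # state update
...         ys.append(h @ C[t] + D * x[t])            # readout through C plus skip D
...     return np.stack(ys)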
Source code in mindnlp/transformers/models/mamba/modeling_mamba.py, lines 45-202.
mindnlp.transformers.models.mamba.modeling_mamba.MambaMixer.__init__(config, layer_idx)
Initializes a MambaMixer instance.

PARAMETER | DESCRIPTION |
---|---|
self | The MambaMixer instance. |
config | A configuration object containing the settings used by the mixer. |
layer_idx | The index of the layer. |

RETURNS | DESCRIPTION |
---|---|
None | |

RAISES | DESCRIPTION |
---|---|
ValueError | If any of the configuration attributes are invalid or missing. |
TypeError | If the input types are incorrect. |

Source code in mindnlp/transformers/models/mamba/modeling_mamba.py, lines 52-113.
mindnlp.transformers.models.mamba.modeling_mamba.MambaMixer.forward(input_states, cache_params=None)
Constructs contextualized states based on input states and cache parameters.

PARAMETER | DESCRIPTION |
---|---|
self | The instance of the MambaMixer class. |
input_states | The input states with shape (batch_size, seq_len, _), where batch_size is the number of sequences, seq_len is the maximum sequence length, and _ is the dimension of the input feature. |
cache_params | The cache parameters containing states for caching computations; defaults to None. |

RETURNS | DESCRIPTION |
---|---|
Tensor | The contextualized states with shape (batch_size, seq_len, output_size), where output_size is the size of the output. |

RAISES | DESCRIPTION |
---|---|
ValueError | If the input_states shape is invalid, or if cache_params is not None and does not contain the required states. |
TypeError | If input_states or cache_params are not of the expected types. |

Source code in mindnlp/transformers/models/mamba/modeling_mamba.py, lines 116-202.
mindnlp.transformers.models.mamba.modeling_mamba.MambaModel
Bases: MambaPreTrainedModel

A class representing the MambaModel.

This class is a Python implementation of the MambaModel, which is a deep learning model used for various natural language processing tasks. The MambaModel inherits from the MambaPreTrainedModel class.

ATTRIBUTE | DESCRIPTION |
---|---|
embeddings | An instance of the nn.Embedding class representing the input embeddings. |
layers | A list of MambaBlock instances representing the layers of the model. |
gradient_checkpointing | A flag indicating whether gradient checkpointing is used during training. |
norm_f | An instance of the MambaRMSNorm class representing the normalization function. |

METHOD | DESCRIPTION |
---|---|
__init__ | Initializes the MambaModel instance. |
get_input_embeddings | Returns the input embeddings. |
set_input_embeddings | Sets the input embeddings to the specified value. |
forward | Constructs the MambaModel. |

Source code in mindnlp/transformers/models/mamba/modeling_mamba.py, lines 491-654.
mindnlp.transformers.models.mamba.modeling_mamba.MambaModel.__init__(config)
Initializes an instance of the MambaModel class.

PARAMETER | DESCRIPTION |
---|---|
self | The instance of the class. |
config | An object that holds the configuration parameters for the model. |

RETURNS | DESCRIPTION |
---|---|
None | |

Source code in mindnlp/transformers/models/mamba/modeling_mamba.py, lines 511-540.
mindnlp.transformers.models.mamba.modeling_mamba.MambaModel.forward(input_ids=None, inputs_embeds=None, cache_params=None, use_cache=None, output_hidden_states=None, return_dict=None, **kwargs)
This method runs the forward pass of the MambaModel, processing the input data through multiple mixer blocks.

PARAMETER | DESCRIPTION |
---|---|
self | The instance of the MambaModel class. |
input_ids | The input tensor containing token indices. Default is None. |
inputs_embeds | The input tensor containing pre-computed embeddings. Default is None. |
cache_params | Cached intermediate states (a MambaCache instance) used for sequential decoding. Default is None. |
use_cache | Flag indicating whether to use caching. Default is None. |
output_hidden_states | Flag indicating whether to output hidden states. Default is None. |
return_dict | Flag indicating whether to return the output as a MambaOutput object. Default is None. |

RETURNS | DESCRIPTION |
---|---|
Union[Tuple, MambaOutput] | Either a tuple containing the hidden states, cache parameters, and all hidden states (if not None), or a MambaOutput object containing the last hidden state, cache parameters (if caching is enabled), and all hidden states. |

RAISES | DESCRIPTION |
---|---|
ValueError | Raised if both input_ids and inputs_embeds are specified simultaneously. |

Source code in mindnlp/transformers/models/mamba/modeling_mamba.py, lines 576-654.
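A minimal usage sketch of the backbone forward pass, assuming small illustrative configuration values; with return_dict=True the result is a MambaOutput whose last_hidden_state has shape (batch_size, seq_len, hidden_size).

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> from mindnlp.transformers.models.mamba.configuration_mamba import MambaConfig
>>> from mindnlp.transformers.models.mamba.modeling_mamba import MambaModel
...
>>> config = MambaConfig(vocab_size=1000, hidden_size=64, num_hidden_layers=2)
>>> model = MambaModel(config)
>>> input_ids = Tensor(np.random.randint(0, 1000, (1, 8)), mindspore.int64)
>>> outputs = model(input_ids=input_ids, use_cache=True, return_dict=True)
>>> outputs.last_hidden_state.shape   # (1, 8, 64)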
mindnlp.transformers.models.mamba.modeling_mamba.MambaModel.get_input_embeddings()
Method to retrieve the input embeddings from the MambaModel instance.

PARAMETER | DESCRIPTION |
---|---|
self | The MambaModel instance itself. |

RETURNS | DESCRIPTION |
---|---|
embeddings | The input embeddings associated with the MambaModel instance. |

Source code in mindnlp/transformers/models/mamba/modeling_mamba.py, lines 542-557.
mindnlp.transformers.models.mamba.modeling_mamba.MambaModel.set_input_embeddings(new_embeddings)
Sets the input embeddings for the MambaModel.

PARAMETER | DESCRIPTION |
---|---|
self | The instance of the MambaModel class. |
new_embeddings | The new input embeddings to be set. This parameter can be of any type. |

RETURNS | DESCRIPTION |
---|---|
None | |

Source code in mindnlp/transformers/models/mamba/modeling_mamba.py, lines 559-574.
mindnlp.transformers.models.mamba.modeling_mamba.MambaOutput
dataclass
Bases: ModelOutput

Class for the MAMBA model outputs.

PARAMETER | DESCRIPTION |
---|---|
last_hidden_state | Sequence of hidden-states at the output of the last layer of the model. |
cache_params | The state of the model at the last time step. Can be used in a forward method with the next input_ids to avoid providing the old input_ids. Includes both the state space model states after the selective scan and the convolutional states. |

Source code in mindnlp/transformers/models/mamba/modeling_mamba.py, lines 440-462.
mindnlp.transformers.models.mamba.modeling_mamba.MambaPreTrainedModel
Bases: PreTrainedModel

An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models.

Source code in mindnlp/transformers/models/mamba/modeling_mamba.py, lines 385-437.
mindnlp.transformers.models.mamba.modeling_mamba.MambaRMSNorm
Bases: Module
MambaRMSNorm is a neural network cell that represents a modified version of the RMS normalization layer. It inherits from the nn.Module class and provides functionality for normalizing hidden states in a neural network.
This class initializes the MambaRMSNorm layer with the specified hidden size and epsilon value for variance. The hidden_size parameter determines the size of the input hidden states, while the eps parameter sets the variance epsilon value for numerical stability.
The forward method of MambaRMSNorm takes hidden_states as input and performs RMS normalization on the input hidden states. It first converts the input hidden states to float32 data type, calculates the variance of the hidden states, and then applies the RMS normalization using the variance and epsilon values. The normalized hidden states are then multiplied by the weight parameter and converted back to the original input data type before being returned.
Note
The implementation details and usage of this class should be referenced from the source code and any related documentation.
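The computation described above can be summarized with a small NumPy sketch (cast to float32, scale by the reciprocal root-mean-square, apply the learned weight, cast back); it is illustrative, not the MindSpore source.

>>> import numpy as np
...
>>> def rms_norm(hidden_states, weight, eps=1e-6):
...     x = hidden_states.astype(np.float32)
...     variance = np.mean(x ** 2, axis=-1, keepdims=True)   # mean of squares over the hidden dim
...     x = x / np.sqrt(variance + eps)                       # normalize by the RMS
...     return (weight * x).astype(hidden_states.dtype)       # rescale and cast back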
Source code in mindnlp/transformers/models/mamba/modeling_mamba.py, lines 255-304.
mindnlp.transformers.models.mamba.modeling_mamba.MambaRMSNorm.__init__(hidden_size, eps=1e-06)
LlamaRMSNorm is equivalent to T5LayerNorm.

Source code in mindnlp/transformers/models/mamba/modeling_mamba.py, lines 275-281.
mindnlp.transformers.models.mamba.modeling_mamba.MambaRMSNorm.forward(hidden_states)
This method applies the MambaRMSNorm operation, normalizing the input hidden states.

PARAMETER | DESCRIPTION |
---|---|
self | The instance of the MambaRMSNorm class. |
hidden_states | The input hidden states to be normalized. It should be a tensor containing the hidden states data. |

RETURNS | DESCRIPTION |
---|---|
Tensor | The normalized hidden states, scaled by the layer's weight and cast back to the input dtype. |

RAISES | DESCRIPTION |
---|---|
ValueError | If the input hidden_states tensor is not valid. |
RuntimeError | If there is an issue during the normalization process. |

Source code in mindnlp/transformers/models/mamba/modeling_mamba.py, lines 283-304.
mindnlp.transformers.models.mamba.configuration_mamba
MAMBA configuration
mindnlp.transformers.models.mamba.configuration_mamba.MambaConfig
Bases: PretrainedConfig

This is the configuration class to store the configuration of a [MambaModel]. It is used to instantiate a MAMBA model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a configuration similar to that of the MAMBA state-spaces/mamba-2.8b architecture.

Configuration objects inherit from [PretrainedConfig] and can be used to control the model outputs. Read the documentation from [PretrainedConfig] for more information.
PARAMETER | DESCRIPTION |
---|---|
vocab_size | Vocabulary size of the MAMBA model. Defines the number of different tokens that can be represented by the input_ids passed when calling MambaModel. |
hidden_size | Dimensionality of the embeddings and hidden states. |
state_size | Shape of the state space latents. |
num_hidden_layers | Number of hidden layers in the model. |
layer_norm_epsilon | The epsilon to use in the layer normalization layers. |
pad_token_id | Padding token id. |
bos_token_id | The id of the beginning-of-sentence token in the vocabulary. |
eos_token_id | The id of the end-of-sentence token in the vocabulary. |
expand | Expanding factor used to determine the intermediate size. |
conv_kernel | Size of the convolution kernel. |
use_bias | Whether or not to use bias in ["in_proj", "out_proj"] of the mixer block. |
use_conv_bias | Whether or not to use bias in the convolution layer of the mixer block. |
hidden_act | The non-linear activation function (function or string) in the decoder. |
initializer_range | The standard deviation of the truncated_normal_initializer for initializing all weight matrices. |
residual_in_fp32 | Whether or not residuals should be kept in float32. |
time_step_rank | Rank of the discretization projection matrix. |
time_step_scale | Scale used to scale the discretization projection bias (dt_proj.bias). |
time_step_min | Minimum time step used to bound dt_proj.bias. |
time_step_max | Maximum time step used to bound dt_proj.bias. |
time_step_init_scheme | Init scheme used for dt_proj.weight. |
time_step_floor | Minimum clamping value of the dt_proj.bias initialization. |
rescale_prenorm_residual | Whether or not to rescale out_proj weights when initializing. |
use_cache | Whether or not the cache should be used. |
Example
>>> from mindnlp.transformers import MambaConfig, MambaModel
...
>>> # Initializing a Mamba configuration
>>> configuration = MambaConfig()
...
>>> # Initializing a model (with random weights) from the configuration
>>> model = MambaModel(configuration)
...
>>> # Accessing the model configuration
>>> configuration = model.config
Source code in mindnlp/transformers/models/mamba/configuration_mamba.py, lines 31-194.
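As a pointer to how two of these values are typically derived (an assumption based on the reference Mamba configuration, not a statement about this exact implementation): the mixer's intermediate size is expand * hidden_size, and time_step_rank='auto' resolves to ceil(hidden_size / 16).

>>> import math
>>> hidden_size, expand = 768, 2
>>> intermediate_size = int(expand * hidden_size)   # 1536 with the defaults above
>>> time_step_rank = math.ceil(hidden_size / 16)    # 48, what 'auto' is assumed to resolve to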
mindnlp.transformers.models.mamba.configuration_mamba.MambaConfig.__init__(vocab_size=50280, hidden_size=768, state_size=16, num_hidden_layers=32, layer_norm_epsilon=1e-05, pad_token_id=0, bos_token_id=0, eos_token_id=0, expand=2, conv_kernel=4, use_bias=False, use_conv_bias=True, hidden_act='silu', initializer_range=0.1, residual_in_fp32=True, time_step_rank='auto', time_step_scale=1.0, time_step_min=0.001, time_step_max=0.1, time_step_init_scheme='random', time_step_floor=0.0001, rescale_prenorm_residual=False, use_cache=True, **kwargs)
Initializes a new instance of the MambaConfig class.

PARAMETER | DESCRIPTION |
---|---|
self | The current instance of the MambaConfig class. |
vocab_size | The size of the vocabulary. Defaults to 50280. |
hidden_size | The size of the hidden state. Defaults to 768. |
state_size | The size of the state. Defaults to 16. |
num_hidden_layers | The number of hidden layers. Defaults to 32. |
layer_norm_epsilon | The epsilon value for layer normalization. Defaults to 1e-05. |
pad_token_id | The token ID for padding. Defaults to 0. |
bos_token_id | The token ID for the beginning of sequence. Defaults to 0. |
eos_token_id | The token ID for the end of sequence. Defaults to 0. |
expand | The expansion factor. Defaults to 2. |
conv_kernel | The kernel size for convolution. Defaults to 4. |
use_bias | Whether to use bias. Defaults to False. |
use_conv_bias | Whether to use bias in convolution. Defaults to True. |
hidden_act | The activation function for hidden layers. Defaults to 'silu'. |
initializer_range | The range for weight initialization. Defaults to 0.1. |
residual_in_fp32 | Whether to keep residuals in FP32. Defaults to True. |
time_step_rank | The rank or 'auto' for the time step. Defaults to 'auto'. |
time_step_scale | The scale factor for the time step. Defaults to 1.0. |
time_step_min | The minimum value for the time step. Defaults to 0.001. |
time_step_max | The maximum value for the time step. Defaults to 0.1. |
time_step_init_scheme | The initialization scheme for the time step. Defaults to 'random'. |
time_step_floor | The floor value for the time step. Defaults to 0.0001. |
rescale_prenorm_residual | Whether to rescale pre-norm residuals. Defaults to False. |
use_cache | Whether to use the cache. Defaults to True. |

RETURNS | DESCRIPTION |
---|---|
None | |

Source code in mindnlp/transformers/models/mamba/configuration_mamba.py, lines 107-194.
mindnlp.transformers.models.mamba.modeling_graph_mamba
MindSpore MAMBA model.
mindnlp.transformers.models.mamba.modeling_graph_mamba.MSMambaBlock
Bases: Module

The MSMambaBlock class represents a block for the MSMamba model. It inherits from the nn.Module class and is designed to handle the configuration and processing of hidden states for the MSMamba model.

ATTRIBUTE | DESCRIPTION |
---|---|
config | An object containing configuration settings for the block. |
layer_idx | An integer representing the index of the layer. |
residual_in_fp32 | A boolean indicating whether residual values are in 32-bit floating point format. |
norm | An instance of the MSMambaRMSNorm class for performing layer normalization. |
mixer | An instance of the MSMambaMixer class for mixing hidden states based on the configuration and layer index. |

METHOD | DESCRIPTION |
---|---|
forward | Processes the input hidden states using the configured normalization and mixing operations, and returns the processed hidden states. |

Note
This class is part of the MSMamba model and is specifically designed for handling the processing of hidden states within the model architecture.

Source code in mindnlp/transformers/models/mamba/modeling_graph_mamba.py, lines 362-429.
mindnlp.transformers.models.mamba.modeling_graph_mamba.MSMambaBlock.__init__(config, layer_idx)
Initializes a new instance of the MSMambaBlock class.

PARAMETER | DESCRIPTION |
---|---|
self | The instance of the MSMambaBlock class. |
config | The configuration object containing various settings. |
layer_idx | The index of the layer in the model. |

RETURNS | DESCRIPTION |
---|---|
None | |

Source code in mindnlp/transformers/models/mamba/modeling_graph_mamba.py, lines 383-404.
mindnlp.transformers.models.mamba.modeling_graph_mamba.MSMambaBlock.forward(hidden_states, cache_params=None)
Runs the forward computation of the MSMambaBlock.

PARAMETER | DESCRIPTION |
---|---|
self | An instance of the MSMambaBlock class. |
hidden_states | The input hidden states to the block. |
cache_params | A cache object containing cached states (default: None). |

RETURNS | DESCRIPTION |
---|---|
Tensor | The processed hidden states after normalization, mixing, and residual addition. |

Source code in mindnlp/transformers/models/mamba/modeling_graph_mamba.py, lines 406-429.
mindnlp.transformers.models.mamba.modeling_graph_mamba.MSMambaCache
The MSMambaCache class represents a cache for storing intermediate states and parameters used in the MSMamba algorithm. It is designed to be used in conjunction with the MSMambaModel class.

This class provides functionality for initializing the cache and storing intermediate states and parameters. The cache is used to store the convolutional states (conv_states) and the state space model states (ssm_states) for each hidden layer in the MSMamba algorithm. The cache is initialized with zero tensors of appropriate shapes.

ATTRIBUTE | DESCRIPTION |
---|---|
`seqlen_offset` | A parameter representing the sequence length offset. |
`dtype` | The data type of the cache tensors (default: mindspore.float16). |
`conv_states` | A parameter storing the convolutional states for each hidden layer. It is a tensor of shape (num_hidden_layers, batch_size, intermediate_size, conv_kernel_size). |
`ssm_states` | A parameter storing the state space model states for each hidden layer. It is a tensor of shape (num_hidden_layers, batch_size, intermediate_size, ssm_state_size). |

Source code in mindnlp/transformers/models/mamba/modeling_graph_mamba.py, lines 243-295.
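The attribute description above fixes the shapes of the stacked cache tensors; the sketch below builds zero tensors of those shapes with example sizes (the sizes themselves are illustrative, not defaults).

>>> import mindspore
>>> from mindspore import ops
...
>>> num_hidden_layers, batch_size, intermediate_size = 2, 1, 1536
>>> conv_kernel_size, ssm_state_size = 4, 16
>>> conv_states = ops.zeros(
...     (num_hidden_layers, batch_size, intermediate_size, conv_kernel_size), mindspore.float16)
>>> ssm_states = ops.zeros(
...     (num_hidden_layers, batch_size, intermediate_size, ssm_state_size), mindspore.float16)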
mindnlp.transformers.models.mamba.modeling_graph_mamba.MSMambaCache.__init__(config, batch_size, dtype=mindspore.float16)
This method initializes an instance of the MSMambaCache class.

PARAMETER | DESCRIPTION |
---|---|
self | The instance of the class. |
config | The configuration object containing parameters for the cache. |
batch_size | The size of the batch for processing. |
dtype | The data type for the cache, defaults to mindspore.float16. |

RETURNS | DESCRIPTION |
---|---|
None | |

RAISES | DESCRIPTION |
---|---|
ValueError | If the batch_size is not a positive integer. |
TypeError | If the dtype is not a valid data type. |

Source code in mindnlp/transformers/models/mamba/modeling_graph_mamba.py, lines 266-295.
mindnlp.transformers.models.mamba.modeling_graph_mamba.MSMambaForCausalLM
Bases: MSMambaPreTrainedModel

MSMambaForCausalLM is a class that represents a Mamba model for causal language modeling. It inherits from MSMambaPreTrainedModel and includes methods for setting and getting input and output embeddings, preparing inputs for generation, and running the forward pass for training and evaluation.

The class includes the following methods:

- __init__: Initializes the model with a given configuration.
- get_output_embeddings: Retrieves the output embeddings of the model.
- set_output_embeddings: Sets new output embeddings for the model.
- get_input_embeddings: Retrieves the input embeddings of the model.
- set_input_embeddings: Sets new input embeddings for the model.
- _update_model_kwargs_for_generation: Updates model keyword arguments for generation.
- prepare_inputs_for_generation: Prepares inputs for generation based on the given parameters.
- forward: Runs the model for training and evaluation, including handling labels for language modeling and computing the loss.

When using the MSMambaForCausalLM class, users can easily manage input and output embeddings, prepare inputs for generating text, and run the model for training and evaluation purposes.

Source code in mindnlp/transformers/models/mamba/modeling_graph_mamba.py, lines 678-902.
mindnlp.transformers.models.mamba.modeling_graph_mamba.MSMambaForCausalLM.__init__(config)
Initializes an instance of MSMambaForCausalLM.

PARAMETER | DESCRIPTION |
---|---|
self | The instance of the class. |
config | An object containing configuration parameters. |

RETURNS | DESCRIPTION |
---|---|
None | |

Source code in mindnlp/transformers/models/mamba/modeling_graph_mamba.py, lines 702-721.
mindnlp.transformers.models.mamba.modeling_graph_mamba.MSMambaForCausalLM.forward(input_ids=None, inputs_embeds=None, cache_params=None, labels=None, output_hidden_states=None, return_dict=None, **kwargs)
PARAMETER | DESCRIPTION |
---|---|
labels | Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set labels = input_ids. Indices are selected in [-100, 0, ..., config.vocab_size]. All labels set to -100 are ignored (masked); the loss is only computed for labels in [0, ..., config.vocab_size]. |

Source code in mindnlp/transformers/models/mamba/modeling_graph_mamba.py, lines 850-902.
mindnlp.transformers.models.mamba.modeling_graph_mamba.MSMambaForCausalLM.get_input_embeddings()
Retrieve the input embeddings from the MSMambaForCausalLM model.

PARAMETER | DESCRIPTION |
---|---|
self | An instance of the MSMambaForCausalLM class. |

RETURNS | DESCRIPTION |
---|---|
nn.Embedding | The input embedding layer of the backbone model. |

Source code in mindnlp/transformers/models/mamba/modeling_graph_mamba.py, lines 760-774.
mindnlp.transformers.models.mamba.modeling_graph_mamba.MSMambaForCausalLM.get_output_embeddings()
Method to retrieve the output embeddings from the MSMambaForCausalLM model.

PARAMETER | DESCRIPTION |
---|---|
self | The instance of the MSMambaForCausalLM class. |

RETURNS | DESCRIPTION |
---|---|
lm_head | The language modeling head that serves as the output embeddings. |

Source code in mindnlp/transformers/models/mamba/modeling_graph_mamba.py, lines 723-737.
mindnlp.transformers.models.mamba.modeling_graph_mamba.MSMambaForCausalLM.prepare_inputs_for_generation(input_ids, cache_params=None, inputs_embeds=None, **kwargs)
Prepare inputs for generation.

PARAMETER | DESCRIPTION |
---|---|
self | The instance of the MSMambaForCausalLM class. |
input_ids | The input tensor containing the tokenized input sequence. |
cache_params | Parameters for caching intermediate computations. |
inputs_embeds | The embedded input tensor. |

RETURNS | DESCRIPTION |
---|---|
dict | The model inputs containing either 'inputs_embeds' or 'input_ids', based on the availability of 'inputs_embeds' and 'cache_params'. |

Source code in mindnlp/transformers/models/mamba/modeling_graph_mamba.py, lines 818-848.
mindnlp.transformers.models.mamba.modeling_graph_mamba.MSMambaForCausalLM.set_input_embeddings(new_embeddings)
Sets the input embeddings for the MSMambaForCausalLM model.

PARAMETER | DESCRIPTION |
---|---|
self | The instance of the MSMambaForCausalLM class. |
new_embeddings | The new input embeddings to be set for the model. Should be a tensor of shape (vocab_size, embedding_dim). |

RETURNS | DESCRIPTION |
---|---|
None | The method sets the input embeddings for the model and does not return any value. |

RAISES | DESCRIPTION |
---|---|
ValueError | If the new_embeddings tensor does not have the correct shape (vocab_size, embedding_dim). |
TypeError | If the new_embeddings parameter is not a tensor. |
RuntimeError | If the operation to set the input embeddings fails for any reason. |

Source code in mindnlp/transformers/models/mamba/modeling_graph_mamba.py, lines 776-794.
mindnlp.transformers.models.mamba.modeling_graph_mamba.MSMambaForCausalLM.set_output_embeddings(new_embeddings)
Sets the output embeddings of the MSMambaForCausalLM model.

PARAMETER | DESCRIPTION |
---|---|
self | The MSMambaForCausalLM object. |
new_embeddings | The new embeddings to be set as the output embeddings. |

RETURNS | DESCRIPTION |
---|---|
None | |

This method allows for setting the output embeddings of the MSMambaForCausalLM model. The output embeddings are used in the generation of predictions by the language model head. By setting new embeddings, you can modify the characteristics of the generated predictions.

Source code in mindnlp/transformers/models/mamba/modeling_graph_mamba.py, lines 739-758.
mindnlp.transformers.models.mamba.modeling_graph_mamba.MSMambaMixer
Bases: Module

Compute ∆, A, B, C, and D, the state space parameters, and compute the contextualized_states.

A and D are input-independent (see the Mamba paper [1], Section 3.5.2, "Interpretation of A", for why A isn't selective). ∆, B, and C are input-dependent; this is a key difference between MSMamba and the linear time-invariant S4, and is why MSMamba is called a selective state space model.

Source code in mindnlp/transformers/models/mamba/modeling_graph_mamba.py, lines 93-241.
mindnlp.transformers.models.mamba.modeling_graph_mamba.MSMambaMixer.__init__(config, layer_idx)
Initializes an instance of the MSMambaMixer class.

PARAMETER | DESCRIPTION |
---|---|
self | The instance of the class. |
config | An object containing configuration parameters for the mixer. |
layer_idx | Index of the current layer. |

RETURNS | DESCRIPTION |
---|---|
None | |

Source code in mindnlp/transformers/models/mamba/modeling_graph_mamba.py, lines 100-161.
mindnlp.transformers.models.mamba.modeling_graph_mamba.MSMambaMixer.forward(input_states, cache_params=None)
Constructs contextualized states using the MSMambaMixer algorithm.

PARAMETER | DESCRIPTION |
---|---|
self | An instance of the MSMambaMixer class. |
input_states | The input states of shape (batch_size, seq_len, _). |
cache_params | The cache parameters. Defaults to None. |

RETURNS | DESCRIPTION |
---|---|
Tensor | The contextualized states computed from the input states. |

Source code in mindnlp/transformers/models/mamba/modeling_graph_mamba.py, lines 164-241.
mindnlp.transformers.models.mamba.modeling_graph_mamba.MSMambaModel
Bases: MSMambaPreTrainedModel

MSMambaModel represents a model for MSMamba that inherits from MSMambaPreTrainedModel.

ATTRIBUTE | DESCRIPTION |
---|---|
embeddings | An embedding layer for the model's vocabulary. |
layers | A list of MSMambaBlock layers for the model. |
gradient_checkpointing | Indicates if gradient checkpointing is enabled. |
norm_f | Normalization function for the model's hidden states. |

METHOD | DESCRIPTION |
---|---|
__init__ | Initializes the MSMambaModel with the given configuration. |
get_input_embeddings | Retrieves the input embeddings for the model. |
set_input_embeddings | Sets new input embeddings for the model. |
forward | Constructs the model based on the input and configuration parameters. |

Source code in mindnlp/transformers/models/mamba/modeling_graph_mamba.py, lines 510-675.
mindnlp.transformers.models.mamba.modeling_graph_mamba.MSMambaModel.__init__(config)
Initializes an instance of MSMambaModel.

PARAMETER | DESCRIPTION |
---|---|
self | The instance of MSMambaModel. |
config | The configuration object containing parameters for the model. |

RETURNS | DESCRIPTION |
---|---|
None | |

Source code in mindnlp/transformers/models/mamba/modeling_graph_mamba.py, lines 527-556.
mindnlp.transformers.models.mamba.modeling_graph_mamba.MSMambaModel.forward(input_ids=None, inputs_embeds=None, cache_params=None, use_cache=None, output_hidden_states=None, return_dict=None, **kwargs)
Runs the forward pass of the MSMambaModel.

PARAMETER | DESCRIPTION |
---|---|
self | The instance of the MSMambaModel. |
input_ids | The input tensor containing the indices of tokens in the input sequence. Default is None. |
inputs_embeds | The input tensor for the embeddings. Default is None. |
cache_params | The optional cache parameters for the model. Default is None. |
use_cache | Flag to use the cache. Default is None. |
output_hidden_states | Flag to output hidden states. Default is None. |
return_dict | Flag to return a dictionary. Default is None. |
**kwargs | Additional keyword arguments. |

RETURNS | DESCRIPTION |
---|---|
Union[Tuple, Dict] | Depending on the value of 'return_dict', either a tuple or a dictionary is returned. |

RAISES | DESCRIPTION |
---|---|
ValueError | If input_ids and inputs_embeds are both None. |
RuntimeError | If an error occurs during the forward pass. |
TypeError | If input_ids or inputs_embeds are not of type mindspore.Tensor. |

Source code in mindnlp/transformers/models/mamba/modeling_graph_mamba.py, lines 591-675.
mindnlp.transformers.models.mamba.modeling_graph_mamba.MSMambaModel.get_input_embeddings()
Retrieve the input embeddings for the MSMambaModel.

PARAMETER | DESCRIPTION |
---|---|
self | The instance of the MSMambaModel class. |

RETURNS | DESCRIPTION |
---|---|
embeddings | The embeddings associated with the input. |

Source code in mindnlp/transformers/models/mamba/modeling_graph_mamba.py, lines 558-572.
mindnlp.transformers.models.mamba.modeling_graph_mamba.MSMambaModel.set_input_embeddings(new_embeddings)
Set the input embeddings for the MSMambaModel.

PARAMETER | DESCRIPTION |
---|---|
self | The instance of the MSMambaModel class. |
new_embeddings | The new input embeddings to be set for the MSMambaModel. |

RETURNS | DESCRIPTION |
---|---|
None | |

Source code in mindnlp/transformers/models/mamba/modeling_graph_mamba.py, lines 574-589.
mindnlp.transformers.models.mamba.modeling_graph_mamba.MSMambaPreTrainedModel
Bases: PreTrainedModel

An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models.

Source code in mindnlp/transformers/models/mamba/modeling_graph_mamba.py, lines 432-508.
mindnlp.transformers.models.mamba.modeling_graph_mamba.MSMambaPreTrainedModel.__call__(*args, **kwargs)
This method handles the call operation when an instance of the MSMambaPreTrainedModel class is called as a function.

PARAMETER | DESCRIPTION |
---|---|
self | The instance of the MSMambaPreTrainedModel class. |

RETURNS | DESCRIPTION |
---|---|
Conditional | The return value depends on the arguments passed when the instance is called. |

Source code in mindnlp/transformers/models/mamba/modeling_graph_mamba.py, lines 486-508.
mindnlp.transformers.models.mamba.modeling_graph_mamba.MSMambaRMSNorm
Bases: Module

MSMambaRMSNorm is a class that represents a modified version of the T5LayerNorm (also known as LlamaRMSNorm). It is designed to normalize the hidden states of a neural network layer.

This class inherits from nn.Module and provides functionality to normalize the hidden states using a modified RMS normalization technique.

ATTRIBUTE | DESCRIPTION |
---|---|
weight | A parameter tensor that stores the weight values for the normalization. |
variance_epsilon | A small value added to the variance to avoid division by zero. |

METHOD | DESCRIPTION |
---|---|
__init__ | Initializes an instance of MSMambaRMSNorm. |
forward | Normalizes the input hidden states using the RMS normalization technique. |

Note
- The input hidden states are expected to be of shape (batch_size, sequence_length, hidden_size).
- The normalization is performed along the last dimension (hidden_size).

Example
>>> hidden_states = ops.randn(batch_size, sequence_length, hidden_size)
>>> norm_layer = MSMambaRMSNorm(hidden_size)
>>> normalized_states = norm_layer(hidden_states)

Source code in mindnlp/transformers/models/mamba/modeling_graph_mamba.py, lines 302-359.
mindnlp.transformers.models.mamba.modeling_graph_mamba.MSMambaRMSNorm.__init__(hidden_size, eps=1e-06)
LlamaRMSNorm is equivalent to T5LayerNorm.

Source code in mindnlp/transformers/models/mamba/modeling_graph_mamba.py, lines 330-336.
mindnlp.transformers.models.mamba.modeling_graph_mamba.MSMambaRMSNorm.forward(hidden_states)
Applies RMS normalization to the input hidden states.

PARAMETER | DESCRIPTION |
---|---|
self | The instance of the MSMambaRMSNorm class. |
hidden_states | The input tensor containing the hidden states. It should be a tensor of shape (batch_size, sequence_length, hidden_size). |

RETURNS | DESCRIPTION |
---|---|
Tensor | The normalized hidden states, scaled by the layer's weight and cast back to the input dtype. |

RAISES | DESCRIPTION |
---|---|
TypeError | If the hidden_states parameter is not a tensor. |
ValueError | If the hidden_states tensor does not have the expected shape. |

Source code in mindnlp/transformers/models/mamba/modeling_graph_mamba.py, lines 338-359.
mindnlp.transformers.models.mamba.modeling_graph_mamba.MambaDense
Bases: Linear

MambaDense represents a dense layer in a neural network. It performs matrix multiplication with optional bias addition and reshaping of input data. This class inherits from nn.Linear.

Example
>>> def forward(self, x):
...     x_shape = x.shape
...     if len(x_shape) != 2:
...         x = x.reshape(-1, x.shape[-1])
...     x = ops.matmul(x, self.weight.T)
...     if self.bias:
...         x = ops.add(x, self.bias)
...     if len(x_shape) != 2:
...         out_shape = x_shape[:-1] + (x.shape[-1],)
...         x = x.reshape(out_shape)
...     return x
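A hypothetical usage sketch of the reshape-through behavior on a 3-D input; the constructor arguments are assumed to follow the nn.Linear parent class (in_features, out_features) and are not taken from the source.

>>> from mindspore import ops
...
>>> dense = MambaDense(64, 128)        # assumed nn.Linear-style signature
>>> x = ops.randn(2, 10, 64)           # (batch, seq_len, in_features)
>>> y = dense(x)                       # flattened to 2-D internally, reshaped back
>>> y.shape                            # (2, 10, 128)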
Source code in mindnlp/transformers/models/mamba/modeling_graph_mamba.py, lines 44-91.
mindnlp.transformers.models.mamba.modeling_graph_mamba.MambaDense.forward(x)
Constructs the output of the MambaDense layer by performing matrix multiplication with the weights and adding the bias if applicable.

PARAMETER | DESCRIPTION |
---|---|
self | The instance of the MambaDense class. |
x | Input data for the layer. Inputs with more than two dimensions are reshaped to 2-D for the matrix multiplication and reshaped back afterwards. |

RETURNS | DESCRIPTION |
---|---|
Tensor | The output of the MambaDense layer after matrix multiplication with the weights and addition of the bias if specified. |

Source code in mindnlp/transformers/models/mamba/modeling_graph_mamba.py, lines 66-91.