cpmbee

mindnlp.transformers.models.cpmbee.configuration_cpmbee

CpmBee model configuration

mindnlp.transformers.models.cpmbee.configuration_cpmbee.CpmBeeConfig

Bases: PretrainedConfig

This is the configuration class to store the configuration of a [CpmBeeModel]. It is used to instantiate a CPMBee model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the CPMBee openbmb/cpm-bee-10b architecture.

Configuration objects inherit from [PretrainedConfig] and can be used to control the model outputs. Read the documentation from [PretrainedConfig] for more information.

PARAMETER DESCRIPTION
vocab_size

Vocabulary size of the CPMBee model. Defines the number of different tokens that can be represented by the input passed when calling [CpmBeeModel].

TYPE: `int`, *optional*, defaults to 30720 DEFAULT: 30720

hidden_size

Dimension of the encoder layers.

TYPE: `int`, *optional*, defaults to 4096 DEFAULT: 4096

num_attention_heads

Number of attention heads in the Transformer encoder.

TYPE: `int`, *optional*, defaults to 64 DEFAULT: 64

dim_head

Dimension of attention heads for each attention layer in the Transformer encoder.

TYPE: `int`, *optional*, defaults to 64 DEFAULT: 64

dim_ff

Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.

TYPE: `int`, *optional*, defaults to 10240 DEFAULT: 10240

num_hidden_layers

Number of layers of the Transformer encoder.

TYPE: `int`, *optional*, defaults to 32 DEFAULT: 32

dropout_p

The dropout probability for all fully connected layers in the embeddings and encoder.

TYPE: `float`, *optional*, defaults to 0.0 DEFAULT: 0.0

position_bias_num_buckets

The number of position_bias buckets.

TYPE: `int`, *optional*, defaults to 256 DEFAULT: 256

position_bias_num_segment_buckets

The number of segment buckets.

TYPE: `int`, *optional*, defaults to 32 DEFAULT: 32

position_bias_max_distance

The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).

TYPE: `int`, *optional*, defaults to 2048 DEFAULT: 2048

eps

The epsilon used by the layer normalization layers.

TYPE: `float`, *optional*, defaults to 1e-6 DEFAULT: 1e-06

init_std

Initialize parameters with std = init_std.

TYPE: `float`, *optional*, defaults to 1.0 DEFAULT: 1.0

use_cache

Whether to use cache.

TYPE: `bool`, *optional*, defaults to `True` DEFAULT: True

distance_scale

Scale the rotary embedding.

TYPE: `float` or `int`, *optional*, defaults to 16 DEFAULT: 16

mask_modules

Decides which feedforward block or attention block is pruned.

TYPE: `list` or `tuple`, *optional*, defaults to None DEFAULT: None

half

Whether the model parameters are half-precision.

TYPE: `bool`, *optional*, defaults to `False` DEFAULT: False

Example
>>> from mindnlp.transformers import CpmBeeModel, CpmBeeConfig
...
>>> # Initializing a CPMBee cpm-bee-10b style configuration
>>> configuration = CpmBeeConfig()
...
>>> # Initializing a model from the cpm-bee-10b style configuration
>>> model = CpmBeeModel(configuration)
...
>>> # Accessing the model configuration
>>> configuration = model.config
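
A configuration with non-default sizes can be built by overriding individual arguments. The values below are purely illustrative (a minimal sketch, not a released checkpoint):

>>> # Hypothetical down-sized configuration for quick experiments
>>> small_configuration = CpmBeeConfig(hidden_size=1024, num_attention_heads=16, dim_head=64, dim_ff=2560, num_hidden_layers=12)
>>> small_model = CpmBeeModel(small_configuration)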
Source code in mindnlp/transformers/models/cpmbee/configuration_cpmbee.py
class CpmBeeConfig(PretrainedConfig):
    r"""
    This is the configuration class to store the configuration of a [`CpmBeeModel`]. It is used to instantiate a
    CPMBee model according to the specified arguments, defining the model architecture. Instantiating a configuration
    with the defaults will yield a similar configuration to that of the CPMBee
    [openbmb/cpm-bee-10b](https://hf-mirror.com/openbmb/cpm-bee-10b) architecture.

    Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
    documentation from [`PretrainedConfig`] for more information.

    Args:
        vocab_size (`int`, *optional*, defaults to 30720):
            Vocabulary size of the CPMBee model. Defines the number of different tokens that can be represented by the
            `input` passed when calling [`CpmBeeModel`].
        hidden_size (`int`, *optional*, defaults to 4096):
            Dimension of the encoder layers.
        num_attention_heads (`int`, *optional*, defaults to 64):
            Number of attention heads in the Transformer encoder.
        dim_head (`int`, *optional*, defaults to 64):
            Dimension of attention heads for each attention layer in the Transformer encoder.
        dim_ff (`int`, *optional*, defaults to 10240):
            Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
        num_hidden_layers (`int`, *optional*, defaults to 32):
            Number of layers of the Transformer encoder.
        dropout_p (`float`, *optional*, defaults to 0.0):
            The dropout probability for all fully connected layers in the embeddings and encoder.
        position_bias_num_buckets (`int`, *optional*, defaults to 256):
            The number of position_bias buckets.
        position_bias_num_segment_buckets (`int`, *optional*, defaults to 32):
            The number of segment buckets.
        position_bias_max_distance (`int`, *optional*, defaults to 2048):
            The maximum sequence length that this model might ever be used with. Typically set this to something large
            just in case (e.g., 512 or 1024 or 2048).
        eps (`float`, *optional*, defaults to 1e-6):
            The epsilon used by the layer normalization layers.
        init_std (`float`, *optional*, defaults to 1.0):
            Initialize parameters with std = init_std.
        use_cache (`bool`, *optional*, defaults to `True`):
            Whether to use cache.
        distance_scale (`float` or `int`, *optional*, defaults to 16):
            Scale the rotary embedding.
        mask_modules (`list` or `tuple`, *optional*, defaults to None):
            Decides which feedforward block or attention block is pruned.
        half (`bool`, *optional*, defaults to `False`):
            Whether the model parameters are half-precision.

    Example:
        ```python
        >>> from mindnlp.transformers import CpmBeeModel, CpmBeeConfig
        ...
        >>> # Initializing a CPMBee cpm-bee-10b style configuration
        >>> configuration = CpmBeeConfig()
        ...
        >>> # Initializing a model from the cpm-bee-10b style configuration
        >>> model = CpmBeeModel(configuration)
        ...
        >>> # Accessing the model configuration
        >>> configuration = model.config
        ```
    """
    model_type = "cpmbee"

    def __init__(
        self,
        vocab_size: int = 30720,
        hidden_size: int = 4096,
        num_attention_heads: int = 64,
        dim_head: int = 64,
        dim_ff: int = 10240,
        num_hidden_layers: int = 32,
        dropout_p: float = 0.0,
        position_bias_num_buckets: int = 256,
        position_bias_num_segment_buckets: int = 32,
        position_bias_max_distance: int = 2048,
        eps: float = 1e-6,
        init_std: float = 1.0,
        use_cache: bool = True,
        distance_scale: Union[int, float] = 16,
        mask_modules: Optional[Union[List, Tuple]] = None,
        half: bool = False,
        **kwargs,
    ):
        """
        __init__

        Initializes a CpmBeeConfig instance.

        Args:
            vocab_size (int): The size of the vocabulary. Defaults to 30720.
            hidden_size (int): The size of the hidden layers. Defaults to 4096.
            num_attention_heads (int): The number of attention heads. Defaults to 64.
            dim_head (int): The dimension of each attention head. Defaults to 64.
            dim_ff (int): The dimension of the feed forward network. Defaults to 10240.
            num_hidden_layers (int): The number of hidden layers. Defaults to 32.
            dropout_p (float): The dropout probability. Defaults to 0.0.
            position_bias_num_buckets (int): The number of buckets for position bias. Defaults to 256.
            position_bias_num_segment_buckets (int): The number of segment buckets for position bias. Defaults to 32.
            position_bias_max_distance (int): The maximum distance for position bias. Defaults to 2048.
            eps (float): A small value to avoid division by zero. Defaults to 1e-06.
            init_std (float): The standard deviation for weight initialization. Defaults to 1.0.
            use_cache (bool): Flag to indicate whether to use cache. Defaults to True.
            distance_scale (Union[int, float]): The scale factor for distance. Defaults to 16.
            mask_modules (Optional[Union[List, Tuple]]): List or Tuple of modules to be masked. Defaults to None.
            half (bool): Flag to indicate whether to use half precision. Defaults to False.

        Returns:
            None.

        Raises:
            None.
        """
        super().__init__(**kwargs)
        self.position_bias_num_segment_buckets = position_bias_num_segment_buckets
        self.hidden_size = hidden_size
        self.num_attention_heads = num_attention_heads
        self.dim_head = dim_head
        self.dim_ff = dim_ff
        self.num_hidden_layers = num_hidden_layers
        self.position_bias_num_buckets = position_bias_num_buckets
        self.position_bias_max_distance = position_bias_max_distance
        self.dropout_p = dropout_p
        self.eps = eps
        self.use_cache = use_cache
        self.vocab_size = vocab_size
        self.init_std = init_std
        self.distance_scale = distance_scale
        self.half = half
        self.mask_modules = mask_modules

mindnlp.transformers.models.cpmbee.configuration_cpmbee.CpmBeeConfig.__init__(vocab_size=30720, hidden_size=4096, num_attention_heads=64, dim_head=64, dim_ff=10240, num_hidden_layers=32, dropout_p=0.0, position_bias_num_buckets=256, position_bias_num_segment_buckets=32, position_bias_max_distance=2048, eps=1e-06, init_std=1.0, use_cache=True, distance_scale=16, mask_modules=None, half=False, **kwargs)

__init__

Initializes a CpmBeeConfig instance.

PARAMETER DESCRIPTION
vocab_size

The size of the vocabulary. Defaults to 30720.

TYPE: int DEFAULT: 30720

hidden_size

The size of the hidden layers. Defaults to 4096.

TYPE: int DEFAULT: 4096

num_attention_heads

The number of attention heads. Defaults to 64.

TYPE: int DEFAULT: 64

dim_head

The dimension of each attention head. Defaults to 64.

TYPE: int DEFAULT: 64

dim_ff

The dimension of the feed forward network. Defaults to 10240.

TYPE: int DEFAULT: 10240

num_hidden_layers

The number of hidden layers. Defaults to 32.

TYPE: int DEFAULT: 32

dropout_p

The dropout probability. Defaults to 0.0.

TYPE: float DEFAULT: 0.0

position_bias_num_buckets

The number of buckets for position bias. Defaults to 256.

TYPE: int DEFAULT: 256

position_bias_num_segment_buckets

The number of segment buckets for position bias. Defaults to 32.

TYPE: int DEFAULT: 32

position_bias_max_distance

The maximum distance for position bias. Defaults to 2048.

TYPE: int DEFAULT: 2048

eps

A small value to avoid division by zero. Defaults to 1e-06.

TYPE: float DEFAULT: 1e-06

init_std

The standard deviation for weight initialization. Defaults to 1.0.

TYPE: float DEFAULT: 1.0

use_cache

Flag to indicate whether to use cache. Defaults to True.

TYPE: bool DEFAULT: True

distance_scale

The scale factor for distance. Defaults to 16.

TYPE: Union[int, float] DEFAULT: 16

mask_modules

List or Tuple of modules to be masked. Defaults to None.

TYPE: Optional[Union[List, Tuple]] DEFAULT: None

half

Flag to indicate whether to use half precision. Defaults to False.

TYPE: bool DEFAULT: False

RETURNS DESCRIPTION

None.
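
Because `CpmBeeConfig` inherits from [PretrainedConfig], a constructed configuration can be written to disk and reloaded with the inherited helpers (a minimal sketch; the directory path is illustrative):

>>> config = CpmBeeConfig(dropout_p=0.1)
>>> config.save_pretrained("./cpm-bee-config")   # writes config.json into the directory
>>> reloaded = CpmBeeConfig.from_pretrained("./cpm-bee-config")
>>> reloaded.dropout_p
0.1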

Source code in mindnlp/transformers/models/cpmbee/configuration_cpmbee.py
def __init__(
    self,
    vocab_size: int = 30720,
    hidden_size: int = 4096,
    num_attention_heads: int = 64,
    dim_head: int = 64,
    dim_ff: int = 10240,
    num_hidden_layers: int = 32,
    dropout_p: float = 0.0,
    position_bias_num_buckets: int = 256,
    position_bias_num_segment_buckets: int = 32,
    position_bias_max_distance: int = 2048,
    eps: float = 1e-6,
    init_std: float = 1.0,
    use_cache: bool = True,
    distance_scale: Union[int, float] = 16,
    mask_modules: Optional[Union[List, Tuple]] = None,
    half: bool = False,
    **kwargs,
):
    """
    __init__

    Initializes a CpmBeeConfig instance.

    Args:
        vocab_size (int): The size of the vocabulary. Defaults to 30720.
        hidden_size (int): The size of the hidden layers. Defaults to 4096.
        num_attention_heads (int): The number of attention heads. Defaults to 64.
        dim_head (int): The dimension of each attention head. Defaults to 64.
        dim_ff (int): The dimension of the feed forward network. Defaults to 10240.
        num_hidden_layers (int): The number of hidden layers. Defaults to 32.
        dropout_p (float): The dropout probability. Defaults to 0.0.
        position_bias_num_buckets (int): The number of buckets for position bias. Defaults to 256.
        position_bias_num_segment_buckets (int): The number of segment buckets for position bias. Defaults to 32.
        position_bias_max_distance (int): The maximum distance for position bias. Defaults to 2048.
        eps (float): A small value to avoid division by zero. Defaults to 1e-06.
        init_std (float): The standard deviation for weight initialization. Defaults to 1.0.
        use_cache (bool): Flag to indicate whether to use cache. Defaults to True.
        distance_scale (Union[int, float]): The scale factor for distance. Defaults to 16.
        mask_modules (Optional[Union[List, Tuple]]): List or Tuple of modules to be masked. Defaults to None.
        half (bool): Flag to indicate whether to use half precision. Defaults to False.

    Returns:
        None.

    Raises:
        None.
    """
    super().__init__(**kwargs)
    self.position_bias_num_segment_buckets = position_bias_num_segment_buckets
    self.hidden_size = hidden_size
    self.num_attention_heads = num_attention_heads
    self.dim_head = dim_head
    self.dim_ff = dim_ff
    self.num_hidden_layers = num_hidden_layers
    self.position_bias_num_buckets = position_bias_num_buckets
    self.position_bias_max_distance = position_bias_max_distance
    self.dropout_p = dropout_p
    self.eps = eps
    self.use_cache = use_cache
    self.vocab_size = vocab_size
    self.init_std = init_std
    self.distance_scale = distance_scale
    self.half = half
    self.mask_modules = mask_modules

mindnlp.transformers.models.cpmbee.tokenization_cpmbee

Tokenization classes for CpmBee.

mindnlp.transformers.models.cpmbee.tokenization_cpmbee.CpmBeeTokenizer

Bases: PreTrainedTokenizer

Construct a CPMBee tokenizer.

PARAMETER DESCRIPTION
vocab_file

Path to the vocabulary file.

TYPE: `str`

bos_token

The beginning of sequence token.

TYPE: `str`, *optional*, defaults to `"<s>"` DEFAULT: '<s>'

eos_token

The end of sequence token.

TYPE: `str`, *optional*, defaults to `"</s>"` DEFAULT: '</s>'

line_token

The line token.

TYPE: `str`, *optional*, defaults to `"\n"` DEFAULT: '\n'

space_token

The space token.

TYPE: `str`, *optional*, defaults to `" "` DEFAULT: ' '

unk_token

The unknown token.

TYPE: `str`, *optional*, defaults to `"<unk>"` DEFAULT: '<unk>'

mask_token

The mask token.

TYPE: `str`, *optional*, defaults to `"<mask>"` DEFAULT: '<mask>'

pad_token

The token used for padding.

TYPE: `str`, *optional*, defaults to `"<pad>"` DEFAULT: '<pad>'

padding_side

The padding side. CPM-Bee will use left padding by default.

TYPE: `str`, *optional*, defaults to `"left"` DEFAULT: 'left'

Source code in mindnlp/transformers/models/cpmbee/tokenization_cpmbee.py
class CpmBeeTokenizer(PreTrainedTokenizer):
    r"""
    Construct a CPMBee tokenizer.

    Args:
        vocab_file (`str`):
            Path to the vocabulary file.
        bos_token (`str`, *optional*, defaults to `"<s>"`):
            The beginning of sequence token.
        eos_token (`str`, *optional*, defaults to `"</s>"`):
            The end of sequence token.
        line_token (`str`, *optional*, defaults to `"\n"`):
            The line token.
        space_token (`str`, *optional*, defaults to `" "`):
            The space token.
        unk_token (`str`, *optional*, defaults to `"<unk>"`):
            The unknown token.
        mask_token (`str`, *optional*, defaults to `"<mask>"`):
            The mask token.
        pad_token (`str`, *optional*, defaults to `"<pad>"`):
            The token used for padding.
        padding_side (`str`, *optional*, defaults to `"left"`):
            The padding side. CPM-Bee will use left padding by default.
    """
    vocab_files_names = VOCAB_FILES_NAMES
    pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP
    max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES
    model_input_names: List[str] = [
        "input_ids",
        "attention_mask",
        "input_id_sub",
        "position",
        "context",
        "sample_ids",
        "num_segments",
        "segment",
        "segment_rel_offset",
        "segment_rel",
    ]
    add_prefix_space = False

    def __init__(
        self,
        vocab_file,
        bos_token="<s>",
        eos_token="</s>",
        line_token="\n",
        space_token=" ",
        unk_token="<unk>",
        mask_token="<mask>",
        pad_token="<pad>",
        padding_side="left",
        **kwargs,
    ):
        """
        Initialize a CpmBeeTokenizer object.

        Args:
            vocab_file (str): The path to the file containing the vocabulary.
            bos_token (str, optional): The beginning of sentence token.
            eos_token (str, optional): The end of sentence token.
            line_token (str, optional): The token used to represent a new line.
            space_token (str, optional): The token used to represent a space.
            unk_token (str, optional): The token used to represent unknown words.
            mask_token (str, optional): The token used for masking.
            pad_token (str, optional): The token used for padding.
            padding_side (str, optional): The side to apply padding.
            **kwargs: Additional keyword arguments.

        Returns:
            None.

        Raises:
            FileNotFoundError: If the vocab_file does not exist.
            TypeError: If any of the arguments are of incorrect type.
        """
        self.encoder: Dict[str, int] = {}
        super().__init__(
            bos_token=bos_token,
            eos_token=eos_token,
            line_token=line_token,
            space_token=space_token,
            unk_token=unk_token,
            mask_token=mask_token,
            pad_token=pad_token,
            padding_side=padding_side,
            **kwargs,
        )

        with open(vocab_file, "r", encoding="utf-8") as reader:
            for token in reader.readlines():
                token = token.rstrip("\n")
                if len(token) == 0:
                    continue
                self.encoder[token] = len(self.encoder)

        self.encoder[" "] = self.encoder["</_>"]
        self.encoder["\n"] = self.encoder["</n>"]
        del self.encoder["</_>"]
        del self.encoder["</n>"]

        self.decoder = {v: k for k, v in self.encoder.items()}

        self._max_word_len = max(len(x) for x in self.encoder.keys())
        self.cpmbee_special_tokens = {k: v for k, v in self.encoder.items() if k.startswith("<") and k.endswith(">")}

        self.ext_table: Dict[int, str] = {}
        self.ext_table_rev: Dict[str, int] = {}

        self.token_id_table: Dict[str, Dict[int, int]] = {}
        self.ext_special_tokens = []

        self.ext_args_for_model = [
            "input_id_subs",
            "input_pos",
            "context",
            "segment_ids",
            "segment_rel_offset",
            "segment_rel",
            "sample_ids",
            "num_segments",
            "predict_segments",
            "answer_placeholders",
            "ext_table",
            "token_id_table",
        ]

    @property
    def bod_token_id(self):
        """
        Returns the token ID for the beginning of document (BOD) token.

        Args:
            self: An instance of the CpmBeeTokenizer class.

        Returns:
            int: The token ID corresponding to the BOD token in the encoder dictionary.

        Raises:
            None.
        """
        return self.encoder[self.bod_token]

    @property
    def eod_token_id(self):
        """
        Method to retrieve the token ID corresponding to the end-of-document token in the CpmBeeTokenizer class.

        Args:
            self: An instance of the CpmBeeTokenizer class.

        Returns:
            int: The token ID of the end-of-document token in the tokenizer's encoder.

        Raises:
            None.
        """
        return self.encoder[self.eod_token]

    @property
    def newline_id(self):
        """
        Returns the ID of the newline token in the CpmBeeTokenizer.

        Args:
            self (CpmBeeTokenizer): An instance of the CpmBeeTokenizer class.

        Returns:
            int: The token ID of the newline (line) token in the encoder.

        Raises:
            None.
        """
        return self.encoder[self.line_token]

    @property
    def vocab_size(self) -> int:
        """
        Returns the size of the vocabulary used by the CpmBeeTokenizer instance.

        Args:
            self:
                The CpmBeeTokenizer instance.

                - This parameter is of type 'CpmBeeTokenizer'.
                - It represents the instance of the CpmBeeTokenizer class on which the method is called.

        Returns:
            int:
                An integer representing the size of the vocabulary.

                - The returned value represents the total number of unique tokens in the vocabulary.

        Raises:
            None.

        Example:
            ```python
            >>> tokenizer = CpmBeeTokenizer.from_pretrained("openbmb/cpm-bee-10b")
            >>> vocab_size = tokenizer.vocab_size
            ```
        """
        return len(self.encoder)

    def __len__(self):
        """
        Size of the full vocabulary with the added tokens.
        """
        return self.vocab_size + len(self.added_tokens_encoder)

    def get_vocab(self):
        """
        Get the vocabulary of the CpmBeeTokenizer instance.

        Args:
            self (CpmBeeTokenizer): The instance of the CpmBeeTokenizer class.
                This parameter represents the current instance of the tokenizer.

        Returns:
            dict: A dictionary containing the combined encoder and added tokens encoder.
                The keys represent tokens, and the values represent their corresponding IDs.

        Raises:
            None.
        """
        return dict(self.encoder, **self.added_tokens_encoder)

    def get_piece(self, text: str) -> str:
        """
        Match with maximum length.
        """
        len_text = len(text)
        for i in range(len(text)):
            sub = text[: len_text - i]
            if (sub in self.encoder) or (sub in self.added_tokens_encoder):
                return sub
        return text[0]

    def tokenize(self, text: TextInput, **kwargs) -> List[str]:
        r"""
        Override the `tokenize` to meet the needs of CPMBee:

        1. Mark the special token with `<` and `>`. The `<>` will be ignored.
        2. Split sentences by the marked special tokens.
        3. Record the marked special token by `ext_table` and `ext_table_rev`.
        4. Tokenize the sentence without special tokens.
        """
        for_cpmbee = kwargs.get("for_cpmbee", False)
        all_special_tokens_extended = {
            str(t): t for t in self.all_special_tokens_extended if isinstance(t, AddedToken)
        }

        sentence_split = [""]
        is_special_token = False
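        # walk the text and split it into alternating plain-text segments (even indices)
        # and "<...>"-marked special-token candidates (odd indices)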
        for i, c in enumerate(text):
            if is_special_token:
                if c == "<":
                    tail = sentence_split.pop(-1)
                    sentence_split[-1] += tail
                    sentence_split.append(c)
                    is_special_token = False
                elif c == ">":
                    # end of special token
                    sentence_split[-1] += c
                    if sentence_split[-1] == "<>":
                        continue
                    is_special_token = False
                    sentence_split.append("")
                else:
                    sentence_split[-1] += c
            else:
                if c == "<":
                    is_special_token = True
                    sentence_split.append(c)
                else:
                    sentence_split[-1] += c
        if is_special_token:
            tail = sentence_split.pop(-1)
            sentence_split[-1] += tail

        output_tokens = []
        for i, part in enumerate(sentence_split):
            if (i & 1) == 1:
                # special token
                output_tokens.append(part)
                if for_cpmbee and (part not in self.encoder) and (part not in self.ext_table_rev):
                    self.ext_table_rev[part] = len(self.ext_table_rev) + self.vocab_size
                    self.ext_table[self.ext_table_rev[part]] = part
            else:
                output_tokens.extend(self._tokenize(part, for_cpmbee=for_cpmbee))

        # drop spaces
        for i, token in enumerate(output_tokens):
            if token in self.added_tokens_encoder:
                token = all_special_tokens_extended.get(token, None)
                left = output_tokens[i - 1] if i > 0 else None
                right = output_tokens[i + 1] if i < len(output_tokens) - 1 else None
                if isinstance(token, AddedToken):
                    if token.rstrip and right:
                        # A bit counter-intuitive but we strip the left of the string
                        # since tok_extended.rstrip means the special token is eating all white spaces on its right
                        output_tokens[i + 1] = right.lstrip()
                    # Strip white spaces on the left
                    if token.lstrip and left:
                        output_tokens[i - 1] = left.rstrip()  # Opposite here
                else:
                    if right:
                        output_tokens[i + 1] = right.lstrip()
                    if left:
                        output_tokens[i - 1] = left.rstrip()

        skipped_tokens = []
        for token in output_tokens:
            if not token:
                continue
            skipped_tokens.append(token)

        return skipped_tokens

    def _tokenize(self, text, **kwargs):
        """
        Converts a string into a sequence of tokens (strings), using the tokenizer. Splits into words for a
        word-based vocabulary.

        Do NOT take care of added tokens. Record the unk tokens and special tokens in `ext_table` and `ext_table_rev`.
        """
        for_cpmbee = kwargs.get("for_cpmbee", False)
        output_tokens = []

        part_st = 0
        last_unk = None
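        # greedy longest-prefix matching: characters that never match a vocabulary piece are
        # accumulated into `last_unk` and emitted as a single unknown chunk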
        while part_st < len(text):
            piece = self.get_piece(text[part_st:])
            if piece in self.encoder or piece in self.added_tokens_encoder:
                if last_unk is None:
                    output_tokens.append(piece)
                else:
                    if for_cpmbee and (last_unk not in self.ext_table_rev):
                        self.ext_table_rev[last_unk] = len(self.ext_table_rev) + self.vocab_size
                        self.ext_table[self.ext_table_rev[last_unk]] = last_unk
                    output_tokens.append(last_unk)
                    output_tokens.append(piece)
                    last_unk = None
            else:
                if last_unk is None:
                    last_unk = piece
                else:
                    last_unk += piece
            part_st += len(piece)
        if last_unk is not None:
            # part end with UNK
            if for_cpmbee and (last_unk not in self.ext_table_rev):
                self.ext_table_rev[last_unk] = len(self.ext_table_rev) + self.vocab_size
                self.ext_table[self.ext_table_rev[last_unk]] = last_unk
            output_tokens.append(last_unk)

        return output_tokens

    def check(self, token):
        """
        Checks if a token is present in the encoder.

        Args:
            self (CpmBeeTokenizer): An instance of the CpmBeeTokenizer class.
            token (Any): The token to be checked in the encoder.

        Returns:
            bool: True if the token is present in the encoder, False otherwise.

        Raises:
            None.
        """
        return token in self.encoder

    def convert_tokens_to_string(self, tokens: List[str]) -> str:
        """
        Converts a list of tokens into a single string.

        Args:
            self (CpmBeeTokenizer): An instance of the CpmBeeTokenizer class.
            tokens (List[str]): A list of tokens to be converted into a string.

        Returns:
            str: A string representation of the tokens.

        Raises:
            None.

        The tokens are concatenated with `''.join(tokens)`; the original list of tokens is not modified.
        """
        return "".join(tokens)

    def _convert_token_to_id(self, token: str):
        """Converts a token (str) in an id using the vocab and ext_table."""
        if token in self.encoder:
            return self.encoder.get(token)
        elif token in self.ext_table_rev:
            return self.ext_table_rev[token]
        elif token in self.added_tokens_encoder:
            return self.added_tokens_encoder[token]
        else:
            return self.unk_token_id

    def _convert_id_to_token(self, index):
        """Converts an index (integer) in a token (str) using the vocab and ext_table."""
        if index in self.ext_table:
            return self.ext_table[index]
        elif index in self.added_tokens_decoder:
            return self.added_tokens_decoder[index]
        else:
            if index >= 0:
                return self.decoder[index]

    def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]:
        """
        Save the vocabulary to a file.

        Args:
            self (CpmBeeTokenizer): The instance of the CpmBeeTokenizer class.
            save_directory (str): The directory where the vocabulary file will be saved.
            filename_prefix (Optional[str]): An optional prefix to prepend to the filename. Default is None.

        Returns:
            Tuple[str]: A tuple containing the path to the saved vocabulary file.

        Raises:
            IOError: If there is an issue with reading or writing the vocabulary file.
            ValueError: If the provided save_directory is not a valid directory.
            KeyError: If any of the keys used for encoding tokens are not found in the encoder dictionary.
        """
        if os.path.isdir(save_directory):
            vocab_file = os.path.join(
                save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"]
            )
        else:
            vocab_file = (filename_prefix + "-" if filename_prefix else "") + save_directory
        index = 0
        self.encoder["</n>"] = self.encoder["\n"]
        del self.encoder["\n"]
        self.encoder["</_>"] = self.encoder[" "]
        del self.encoder[" "]
        with open(vocab_file, "w", encoding="utf-8") as writer:
            for token, token_index in sorted(self.encoder.items(), key=lambda x: x[1]):
                if index != token_index:
                    logger.warning(
                        f"Saving vocabulary to {vocab_file}: vocabulary indices are not consecutive."
                        " Please check that the vocabulary is not corrupted!"
                    )
                    index = token_index
                writer.write(token + "\n")
                index += 1
        return (vocab_file,)

    def __call__(self, text, *args, **kwargs):
        r"""
        CPMBee `call` method will use `_tokenize_cpmbee` when the input type is dict.
        """
        if isinstance(text, dict):
            return self._batch_tokenize_cpmbee([text], *args, **kwargs)
        elif isinstance(text, (list, tuple)):
            if isinstance(text[0], dict):
                return self._batch_tokenize_cpmbee(text, *args, **kwargs)
            else:
                return super().__call__(text, *args, **kwargs)
        else:
            return super().__call__(text, *args, **kwargs)

    # tokenization
    def _tokenize_cpmbee(self, data: TextInput, *args, **kwargs) -> List[str]:
        """
        A tokenize method to process dict data. Exclusive for CPMBee.
        """
        if isinstance(data, str):
            data = json.loads(data)
        if not isinstance(data, Dict):
            raise TypeError(
                "CpmBeeTokenizer input data should be dict or str in dict format, but got {}".format(type(data))
            )

        # 1. prepare answer placeholder
        answer_placeholders = []

        def _put_placeholder(data: Any, path: List[str] = []):
            if isinstance(data, dict):
                ret = {}
                for k, v in data.items():
                    ret[k] = _put_placeholder(v, path + [k])
                return ret
            else:
                answer_placeholders.append(path)
                return "<ans_{}>".format(len(answer_placeholders))

        data["<ans>"] = _put_placeholder(data["<ans>"])

        (
            input_ids,
            input_id_subs,
            context,
            segment_ids,
            segment_rel,
            n_segments,
            table_states,
        ) = self.convert_data_to_id(data, shuffle_answer=False, max_depth=8)

        # <ans> mapping from sub to id
        sub_ans_map: Dict[int, int] = {}
        for fake_id, token_sub in table_states["token_id_table"]["<ans>"].items():
            token = table_states["ext_table"][fake_id]
            if token.startswith("<ans_") and token.endswith(">"):
                ans_id = int(token[5:-1])
                sub_ans_map[token_sub] = ans_id

        tmp_input_ids = []
        tmp_input_sub = []
        tmp_input_seg = []

        # get predict segments
        predict_segments: List[Tuple[int, int]] = []
        for i in range(input_ids.shape[0]):
            if context[i] == 0:
                if input_ids[i] == self.encoder["<ans>"]:
                    # is ans
                    # (segment_id, ans_id)
                    predict_segments.append((segment_ids[i], sub_ans_map[input_id_subs[i]]))
            else:
                tmp_input_ids.append(input_ids[i])
                tmp_input_sub.append(input_id_subs[i])
                tmp_input_seg.append(segment_ids[i])

        if len(predict_segments) == 0:
            raise ValueError("No answer to predict")

        input_ids = np.array(tmp_input_ids, dtype=np.int32)  # all context
        input_id_subs = np.array(tmp_input_sub, dtype=np.int32)  # [0, 0, 0, 0, 1, 0, 0, 2, 0, ...]
        context = np.full_like(tmp_input_ids, 1, dtype=np.int8)  # [1, 1, 1, ...]
        segment_ids = np.array(tmp_input_seg, dtype=np.int32)  # [0, 0, 0, 1, 1, 1, 2, 2, 2, 2, ...]
        sample_ids = np.zeros(input_ids.shape, dtype=np.int32)  # [0, 0, 0, 0, ...]
        segment_rel_offset = np.zeros(input_ids.shape, dtype=np.int32)  # [0, 0, 0, ...]
        num_segments = np.full(input_ids.shape, n_segments, dtype=np.int32)  # [n_seg, n_seg, n_seg, ...]
        input_pos = np.arange(input_ids.shape[0], dtype=np.int32)  # [0, 1, 2, 3, 4, ...]

        return (
            self.prepare_for_model(
                input_ids.tolist(),
                input_id_subs=input_id_subs.tolist(),
                input_pos=input_pos.tolist(),
                context=context.tolist(),
                segment_ids=segment_ids.tolist(),
                segment_rel_offset=segment_rel_offset.tolist(),
                segment_rel=segment_rel.tolist(),
                sample_ids=sample_ids.tolist(),
                num_segments=num_segments.tolist(),
                **kwargs,
            ),
            predict_segments,
            answer_placeholders,
            table_states["ext_table"],
            table_states["token_id_table"],
        )

    def _batch_tokenize_cpmbee(self, data_lst, *args, **kwargs):
        """
        Batched `_tokenize_cpmbee`.
        """
        return_tensors = kwargs.get("return_tensors", None)
        batch_outputs = {}
        segment_rel_pack = []
        other_info = []

        batch_ext_table_map: Dict[Tuple[int, int], int] = {}
        batch_ext_table_ids: List[int] = []
        batch_ext_table_sub: List[int] = []

        for data in data_lst:
            self.ext_table = {}
            self.ext_table_rev = {}
            self.token_id_table = {}
            (outputs, predict_segments, answer_placeholders, ext_table, token_id_table) = self._tokenize_cpmbee(
                data,
                truncation=None,
                padding=PaddingStrategy.DO_NOT_PAD.value,
                max_length=None,
                pad_to_multiple_of=None,
                return_attention_mask=False,
                return_tensors=None,
            )
            rev_ext_table = {}
            for token, mp in token_id_table.items():
                if token == "<ans>":
                    continue
                token_id = self.encoder[token]
                for fake_id, token_sub in mp.items():
                    if token_sub > 0:
                        if (token_id, token_sub) not in batch_ext_table_map:
                            batch_ext_table_map[(token_id, token_sub)] = len(batch_ext_table_ids) + self.vocab_size
                            batch_ext_table_ids.append(token_id)
                            batch_ext_table_sub.append(token_sub)
                        rev_ext_table[batch_ext_table_map[(token_id, token_sub)]] = ext_table[fake_id]
                    else:
                        rev_ext_table[token_id] = ext_table[fake_id]

            segment_rel_pack.append(np.array(outputs.pop("segment_rel")))
            other_info.append(
                {
                    "predict_segments": predict_segments,
                    "answer_placeholders": answer_placeholders,
                    "ext_table": rev_ext_table,
                }
            )

            for key, value in outputs.items():
                if key not in batch_outputs:
                    batch_outputs[key] = []
                batch_outputs[key].append(value)

        max_length = max(len(item) for item in batch_outputs[self.model_input_names[0]])
        batch_size = len(batch_outputs[self.model_input_names[0]])
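        # left-pad every feature of each sample up to the longest sequence in the batch
        # (CPM-Bee uses left padding, so pad ids are prepended)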
        for i in range(batch_size):
            inputs = {k: v[i] for k, v in batch_outputs.items()}

            for k, v in inputs.items():
                required_input = v

                needs_to_be_padded = len(required_input) != max_length

                if needs_to_be_padded:
                    difference = max_length - len(required_input)
                    batch_outputs[k][i] = [self.pad_token_id] * difference + required_input

        max_num_rels = 0
        for rel in segment_rel_pack:
            max_num_rels = max(max_num_rels, rel.shape[0])
        padded_rels = np.zeros((len(segment_rel_pack), max_num_rels), dtype=np.int32)
        for i, rel in enumerate(segment_rel_pack):
            padded_rels[i, : rel.shape[0]] = rel
        batch_outputs["segment_rel"] = padded_rels
        batch_outputs["batch_ext_table_ids"] = np.array(batch_ext_table_ids, dtype=np.int32)
        batch_outputs["batch_ext_table_sub"] = np.array(batch_ext_table_sub, dtype=np.int32)
        batch_outputs = BatchEncoding(batch_outputs, tensor_type=return_tensors)
        batch_outputs["other_info"] = other_info

        return batch_outputs

    def convert_data_to_id(
        self,
        data: Any,
        prev_ext_states: Optional[_PrevExtTableStates] = None,
        shuffle_answer: bool = True,
        max_depth: int = 8,
    ):
        """
        Parse a dict to data ids. Exclusive for CPMBee. It will

        1. parse the dict to segments and get segment_rel, which for calculating of position_bias.
        2. tokenize every segment.
        """
        root: _DictTree = {
            "value": "<root>",
            "children": [],
            "depth": 0,
            "segment_id": 0,
            "need_predict": False,
        }

        segments = [root]

        def _build_dict_tree(data: CPMBeeInputType, depth: int, need_predict: bool) -> List[_DictTree]:
            if isinstance(data, dict):
                ret_list: List[_DictTree] = []
                curr_items = list(data.items())
                if need_predict and shuffle_answer:
                    access_idx = np.arange(len(curr_items))
                    np.random.shuffle(access_idx)
                    curr_items = [curr_items[idx] for idx in access_idx]
                for k, v in curr_items:
                    child_info: _DictTree = {
                        "value": k,
                        "children": [],
                        "depth": depth,
                        "segment_id": len(segments),
                        "need_predict": False,  # only leaves are contexts
                    }
                    segments.append(child_info)
                    child_info["children"] = _build_dict_tree(
                        v, depth + 1, need_predict or (depth == 1 and k == "<ans>")
                    )  # elements in <root>.<ans>

                    ret_list.append(child_info)
                return ret_list
            else:
                assert isinstance(data, str), "Invalid data {}".format(data)
                ret: _DictTree = {
                    "value": data,
                    "children": [],
                    "depth": depth,
                    "segment_id": len(segments),
                    "need_predict": need_predict,
                }
                segments.append(ret)
                return [ret]

        root["children"] = _build_dict_tree(data, 1, False)

        num_segments = len(segments)
        segment_rel = np.zeros((num_segments * num_segments,), dtype=np.int32)
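        # segment_rel is a flattened (num_segments x num_segments) matrix; _build_segment_rel below
        # fills it with bucketed (n_up, n_down) depth relations used for the model's position_bias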

        def _build_segment_rel(node: _DictTree) -> List[Tuple[int, int]]:
            ret: List[Tuple[int, int]] = [(node["segment_id"], node["depth"])]
            for child in node["children"]:
                sub = _build_segment_rel(child)
                for seg_id_1, depth_1 in sub:
                    for seg_id_2, depth_2 in ret:
                        n_up = min(depth_1 - node["depth"], max_depth - 1)
                        n_down = min(depth_2 - node["depth"], max_depth - 1)
                        segment_rel[seg_id_1 * num_segments + seg_id_2] = rel_to_bucket(
                            n_up, n_down, max_depth=max_depth
                        )
                        segment_rel[seg_id_2 * num_segments + seg_id_1] = rel_to_bucket(
                            n_down, n_up, max_depth=max_depth
                        )
                ret.extend(sub)
            return ret

        _build_segment_rel(root)

        input_ids: List[int] = []
        input_id_subs: List[int] = []
        segment_bound: List[Tuple[int, int]] = []

        if prev_ext_states is not None:
            self.ext_table = prev_ext_states["ext_table"]
            self.token_id_table = prev_ext_states["token_id_table"]

        for seg in segments:
            # tokenize
            tokens = self.convert_tokens_to_ids(self.tokenize(seg["value"], for_cpmbee=True))

            token_id_subs = []
            reid_token_ids = []
            for idx in tokens:
                if idx in self.ext_table:
                    # unk or special token
                    token = self.ext_table[idx]
                    if token.startswith("<") and token.endswith(">"):
                        # special token
                        if "_" in token:
                            token_name = token[1:-1].split("_", maxsplit=1)[0]
                        else:
                            token_name = token[1:-1]
                        token_name = "<{}>".format(token_name)
                    else:
                        token_name = "<unk>"

                    if token_name not in self.token_id_table:
                        self.token_id_table[token_name] = {}
                    if idx not in self.token_id_table[token_name]:
                        self.token_id_table[token_name][idx] = len(self.token_id_table[token_name])
                    if token_name not in self.encoder:
                        raise ValueError("Invalid token {}".format(token))
                    reid_token_ids.append(self.encoder[token_name])
                    token_id_subs.append(self.token_id_table[token_name][idx])
                else:
                    reid_token_ids.append(idx)
                    token_id_subs.append(0)
            tokens = [self.bos_token_id] + reid_token_ids
            token_id_subs = [0] + token_id_subs
            # eos_id indicates no need_predict
            if not seg["need_predict"]:  # eos
                tokens = tokens + [self.eos_token_id]
                token_id_subs = token_id_subs + [0]
            else:
                # no eos
                pass
            begin = len(input_ids)
            input_ids.extend(tokens)
            input_id_subs.extend(token_id_subs)
            end = len(input_ids)
            segment_bound.append((begin, end))

        ids = np.array(input_ids, dtype=np.int32)
        id_subs = np.array(input_id_subs, dtype=np.int32)
        segs = np.zeros((ids.shape[0],), dtype=np.int32)  # assign segment ids according to segment_bound
        context = np.zeros((ids.shape[0],), dtype=np.int8)
        for i, (begin, end) in enumerate(segment_bound):
            if not segments[i]["need_predict"]:
                context[begin:end] = 1
            segs[begin:end] = i

        curr_ext_table_states: _PrevExtTableStates = {
            "ext_table": self.ext_table,
            "token_id_table": self.token_id_table,
        }
        return ids, id_subs, context, segs, segment_rel, num_segments, curr_ext_table_states

    def prepare_for_model(
        self,
        ids: List[int],
        pair_ids: Optional[List[int]] = None,
        add_special_tokens: bool = True,
        padding: Union[bool, str, PaddingStrategy] = False,
        truncation: Union[bool, str, TruncationStrategy] = None,
        max_length: Optional[int] = None,
        stride: int = 0,
        pad_to_multiple_of: Optional[int] = None,
        return_tensors: Optional[Union[str, TensorType]] = None,
        return_token_type_ids: Optional[bool] = None,
        return_attention_mask: Optional[bool] = None,
        return_overflowing_tokens: bool = False,
        return_special_tokens_mask: bool = False,
        return_length: bool = False,
        verbose: bool = True,
        prepend_batch_axis: bool = False,
        **kwargs,
    ) -> BatchEncoding:
        """
        Prepares a sequence of input id, or a pair of sequences of inputs ids so that it can be used by the model. It
        adds special tokens, truncates sequences if overflowing while taking into account the special tokens and
        manages a moving window (with user defined stride) for overflowing tokens. Please Note, for *pair_ids*
        different than `None` and *truncation_strategy = longest_first* or `True`, it is not possible to return
        overflowing tokens. Such a combination of arguments will raise an error.

        Args:
            ids (`List[int]`):
                Tokenized input ids of the first sequence. Can be obtained from a string by chaining the `tokenize` and
                `convert_tokens_to_ids` methods.
            pair_ids (`List[int]`, *optional*):
                Tokenized input ids of the second sequence. Can be obtained from a string by chaining the `tokenize`
                and `convert_tokens_to_ids` methods.
        """
        # Backward compatibility for 'truncation_strategy', 'pad_to_max_length'
        padding_strategy, truncation_strategy, max_length, kwargs = self._get_padding_truncation_strategies(
            padding=padding,
            truncation=truncation,
            max_length=max_length,
            pad_to_multiple_of=pad_to_multiple_of,
            verbose=verbose,
            **kwargs,
        )

        pair = bool(pair_ids is not None)
        len_ids = len(ids)
        len_pair_ids = len(pair_ids) if pair else 0

        if return_token_type_ids and not add_special_tokens:
            raise ValueError(
                "Asking to return token_type_ids while setting add_special_tokens to False "
                "results in an undefined behavior. Please set add_special_tokens to True or "
                "set return_token_type_ids to None."
            )

        if (
            return_overflowing_tokens
            and truncation_strategy == TruncationStrategy.LONGEST_FIRST
            and pair_ids is not None
        ):
            raise ValueError(
                "Not possible to return overflowing tokens for pair of sequences with the "
                "`longest_first`. Please select another truncation strategy than `longest_first`, "
                "for instance `only_second` or `only_first`."
            )

        # Load from model defaults
        if return_token_type_ids is None:
            return_token_type_ids = "token_type_ids" in self.model_input_names
        if return_attention_mask is None:
            return_attention_mask = "attention_mask" in self.model_input_names

        encoded_inputs = {}

        # Compute the total size of the returned encodings
        total_len = len_ids + len_pair_ids + (self.num_special_tokens_to_add(pair=pair) if add_special_tokens else 0)

        # Truncation: Handle max sequence length
        overflowing_tokens = []
        if truncation_strategy != TruncationStrategy.DO_NOT_TRUNCATE and max_length and total_len > max_length:
            ids, pair_ids, overflowing_tokens = self.truncate_sequences(
                ids,
                pair_ids=pair_ids,
                num_tokens_to_remove=total_len - max_length,
                truncation_strategy=truncation_strategy,
                stride=stride,
            )

        if return_overflowing_tokens:
            encoded_inputs["overflowing_tokens"] = overflowing_tokens
            encoded_inputs["num_truncated_tokens"] = total_len - max_length

        # Add special tokens
        if add_special_tokens:
            sequence = self.build_inputs_with_special_tokens(ids, pair_ids)
            token_type_ids = self.create_token_type_ids_from_sequences(ids, pair_ids)
        else:
            sequence = ids + pair_ids if pair else ids
            token_type_ids = [0] * len(ids) + ([0] * len(pair_ids) if pair else [])

        # Build output dictionary
        encoded_inputs["input_ids"] = sequence
        if return_token_type_ids:
            encoded_inputs["token_type_ids"] = token_type_ids
        if return_special_tokens_mask:
            if add_special_tokens:
                encoded_inputs["special_tokens_mask"] = self.get_special_tokens_mask(ids, pair_ids)
            else:
                encoded_inputs["special_tokens_mask"] = [0] * len(sequence)

        # Check lengths
        self._eventual_warn_about_too_long_sequence(encoded_inputs["input_ids"], max_length, verbose)

        # Padding
        if padding_strategy != PaddingStrategy.DO_NOT_PAD or return_attention_mask:
            encoded_inputs = self.pad(
                encoded_inputs,
                max_length=max_length,
                padding=padding_strategy.value,
                pad_to_multiple_of=pad_to_multiple_of,
                return_attention_mask=return_attention_mask,
            )

        if return_length:
            encoded_inputs["length"] = len(encoded_inputs["input_ids"])

        # for CPMBee, encode all the model arguments
        for arg in self.ext_args_for_model:
            v = kwargs.get(arg, None)
            if v is not None:
                encoded_inputs[arg] = v

        batch_outputs = BatchEncoding(
            encoded_inputs, tensor_type=return_tensors, prepend_batch_axis=prepend_batch_axis
        )

        return batch_outputs

    def prepare_for_finetune(
        self,
        data_list: List[Dict],
        max_length: int = 2048
    ):
        """
        Prepares the input data for fine-tuning.

        Args:
            self (CpmBeeTokenizer): The instance of the CpmBeeTokenizer class.
            data_list (List[Dict]): A list of dictionaries containing the input data.
            max_length (int, optional): The maximum length of the input data. Defaults to 2048.

        Returns:
            BatchEncoding: The batched model inputs, padded to `max_length` and converted to MindSpore tensors.

        Raises:
            None.
        """
        _inputs: List[NDArray[np.int32]] = []
        _inputs_sub: List[NDArray[np.int32]] = []
        _context: List[NDArray[np.int8]] = []
        _sample_ids: List[NDArray[np.int32]] = []
        _segments: List[NDArray[np.int32]] = []
        _num_segments: List[NDArray[np.int32]] = []
        _segment_rel_offset: List[NDArray[np.int32]] = []
        _segment_rel: List[NDArray[np.int32]] = []
        _spans: List[List[int]] = []
        _raw_data: List[List[Any]] = []

        raw_data = {}
        for data in data_list:
            (
                input_ids,
                input_id_subs,
                context,
                segment_ids,
                segment_rel,
                n_segments,
                _
            ) = self.convert_data_to_id(data)

            input_ids = input_ids[: max_length]
            context = context[: max_length]
            segment_ids = segment_ids[: max_length]
            raw_data["input"] = data
            raw_data["samples"] = []

            sample_ids = np.zeros(input_ids.shape, dtype=np.int32)
            segment_rel_offset = np.zeros(input_ids.shape, dtype=np.int32)
            num_segments = np.full(input_ids.shape, n_segments, dtype=np.int32)

            _inputs.append(input_ids)
            _inputs_sub.append(input_id_subs)
            _context.append(context)
            _sample_ids.append(sample_ids)
            _segments.append(segment_ids)
            _num_segments.append(num_segments)
            _segment_rel_offset.append(segment_rel_offset)
            _segment_rel.append(segment_rel)
            _spans.append([input_ids.shape[0]])
            _raw_data.append([raw_data])

        batch_size = len(_inputs)
        inputs = np.zeros((batch_size, max_length), dtype=np.int32)
        inputs_sub = np.zeros((batch_size, max_length), dtype=np.int32)
        context = np.zeros((batch_size, max_length), dtype=np.int8)
        sample_ids = np.zeros((batch_size, max_length), dtype=np.int32)
        segments = np.zeros((batch_size, max_length), dtype=np.int32)
        num_segments = np.zeros((batch_size, max_length), dtype=np.int32)
        segment_rel_offset = np.zeros((batch_size, max_length), dtype=np.int32)
        tgt = np.full((batch_size, max_length), -100, dtype=np.int32)

        max_rel = 0
        for i in range(batch_size):
            max_rel = max(max_rel, _segment_rel[i].shape[0])
        segment_rel = np.zeros((batch_size, max_rel), dtype=np.int32)
        spans = np.zeros((batch_size, max_length), dtype=np.int32)
        length = np.zeros((batch_size,), dtype=np.int32)

        batch_ext_table_map: Dict[Tuple[int, int], int] = {}
        batch_ext_table_ids: List[int] = []
        batch_ext_table_sub: List[int] = []
        raw_data_list: List[Any] = []

        for i in range(batch_size):
            instance_length = _inputs[i].shape[0]
            rel_size = _segment_rel[i].shape[0]
            inputs[i, :instance_length] = _inputs[i]
            inputs_sub[i, :instance_length] = _inputs_sub[i]
            context[i, :instance_length] = _context[i]
            sample_ids[i, :instance_length] = _sample_ids[i]
            segments[i, :instance_length] = _segments[i]
            num_segments[i, :instance_length] = _num_segments[i]
            segment_rel_offset[i, :instance_length] = _segment_rel_offset[i]
            segment_rel[i, :rel_size] = _segment_rel[i]

            span_begin = 0
            for span_id, span_end in enumerate(_spans[i]):
                spans[i, span_begin:span_end] = span_id
                span_begin = span_end
            length[i] = instance_length
            raw_data_list.extend(_raw_data[i])

            for j in range(instance_length):
                idx, idx_sub = _inputs[i][j], _inputs_sub[i][j]
                tgt_idx = idx
                if idx_sub > 0:
                    # need to be in ext table
                    if (idx, idx_sub) not in batch_ext_table_map:
                        batch_ext_table_map[(idx, idx_sub)] = len(batch_ext_table_map)
                        batch_ext_table_ids.append(idx)
                        batch_ext_table_sub.append(idx_sub)
                    tgt_idx = batch_ext_table_map[(idx, idx_sub)] + self.vocab_size
                if j > 1 and context[i, j - 1] == 0:
                    if idx != self.bos_token_id:
                        tgt[i, j - 1] = tgt_idx
                    else:
                        tgt[i, j - 1] = self.eos_token_id
            if context[i, instance_length - 1] == 0:
                tgt[i, instance_length - 1] = self.eos_token_id

        if len(batch_ext_table_map) == 0:
            # placeholder
            batch_ext_table_ids.append(0)
            batch_ext_table_sub.append(1)

        return BatchEncoding({
            "input_ids": inputs,
            "input_id_sub": inputs_sub,
            "length": length,
            "context": context > 0,
            "sample_ids": sample_ids,
            "num_segments": num_segments,
            "segment": segments,
            "segment_rel_offset": segment_rel_offset,
            "segment_rel": segment_rel,
            "span": spans,
            "labels": tgt,
            "ext_table_ids": np.array(batch_ext_table_ids, dtype=np.int32),
            "ext_table_sub": np.array(batch_ext_table_sub, dtype=np.int32)
        }, tensor_type="ms")

mindnlp.transformers.models.cpmbee.tokenization_cpmbee.CpmBeeTokenizer.bod_token_id property

Returns the token ID for the beginning of document (BOD) token.

PARAMETER DESCRIPTION
self

An instance of the CpmBeeTokenizer class.

RETURNS DESCRIPTION
int

The token ID corresponding to the BOD token in the encoder dictionary.

mindnlp.transformers.models.cpmbee.tokenization_cpmbee.CpmBeeTokenizer.eod_token_id property

Method to retrieve the token ID corresponding to the end-of-document token in the CpmBeeTokenizer class.

PARAMETER DESCRIPTION
self

An instance of the CpmBeeTokenizer class.

RETURNS DESCRIPTION
int

The token ID of the end-of-document token in the tokenizer's encoder.

mindnlp.transformers.models.cpmbee.tokenization_cpmbee.CpmBeeTokenizer.newline_id property

Returns the ID of the newline token in the CpmBeeTokenizer.

PARAMETER DESCRIPTION
self

An instance of the CpmBeeTokenizer class.

TYPE: CpmBeeTokenizer

RETURNS DESCRIPTION
int

The token ID of the newline token in the tokenizer's encoder.

mindnlp.transformers.models.cpmbee.tokenization_cpmbee.CpmBeeTokenizer.vocab_size: int property

Returns the size of the vocabulary used by the CpmBeeTokenizer instance.

PARAMETER DESCRIPTION
self

The CpmBeeTokenizer instance.

  • This parameter is of type 'CpmBeeTokenizer'.
  • It represents the instance of the CpmBeeTokenizer class on which the method is called.

RETURNS DESCRIPTION
int

An integer representing the size of the vocabulary.

  • The returned value represents the total number of unique tokens in the vocabulary.

TYPE: int

Example
>>> tokenizer = CpmBeeTokenizer("vocab.txt")  # "vocab.txt" stands in for a real CPM-Bee vocabulary file
>>> tokenizer.vocab_size
5000

mindnlp.transformers.models.cpmbee.tokenization_cpmbee.CpmBeeTokenizer.__call__(text, *args, **kwargs)

CPMBee call method will use _tokenize_cpmbee when the input type is dict.

Source code in mindnlp/transformers/models/cpmbee/tokenization_cpmbee.py
def __call__(self, text, *args, **kwargs):
    r"""
    CPMBee `call` method will use `_tokenize_cpmbee` when the input type is dict.
    """
    if isinstance(text, dict):
        return self._batch_tokenize_cpmbee([text], *args, **kwargs)
    elif isinstance(text, (list, tuple)):
        if isinstance(text[0], dict):
            return self._batch_tokenize_cpmbee(text, *args, **kwargs)
        else:
            return super().__call__(text, *args, **kwargs)
    else:
        return super().__call__(text, *args, **kwargs)
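
A minimal usage sketch of the dict-based call path. The checkpoint name and the dict contents below are illustrative assumptions, not taken from this page; any CPM-Bee vocabulary loaded into the tokenizer is handled the same way.

Example
>>> from mindnlp.transformers.models.cpmbee.tokenization_cpmbee import CpmBeeTokenizer
>>> # hypothetical checkpoint name used only for illustration
>>> tokenizer = CpmBeeTokenizer.from_pretrained("openbmb/cpm-bee-10b")
>>> # a dict (or a list of dicts) is routed to _batch_tokenize_cpmbee
>>> batch = tokenizer({"input": "今天天气真好", "<ans>": ""})
>>> # plain strings fall back to the regular __call__ of the base tokenizer
>>> encoding = tokenizer("Hello CPM-Bee")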

mindnlp.transformers.models.cpmbee.tokenization_cpmbee.CpmBeeTokenizer.__init__(vocab_file, bos_token='<s>', eos_token='</s>', line_token='\n', space_token=' ', unk_token='<unk>', mask_token='<mask>', pad_token='<pad>', padding_side='left', **kwargs)

Initialize a CpmBeeTokenizer object.

PARAMETER DESCRIPTION
vocab_file

The path to the file containing the vocabulary.

TYPE: str

bos_token

The beginning of sentence token.

TYPE: str DEFAULT: '<s>'

eos_token

The end of sentence token.

TYPE: str DEFAULT: '</s>'

line_token

The token used to represent a new line.

TYPE: str DEFAULT: '\n'

space_token

The token used to represent a space.

TYPE: str DEFAULT: ' '

unk_token

The token used to represent unknown words.

TYPE: str DEFAULT: '<unk>'

mask_token

The token used for masking.

TYPE: str DEFAULT: '<mask>'

pad_token

The token used for padding.

TYPE: str DEFAULT: '<pad>'

padding_side

The side to apply padding.

TYPE: str DEFAULT: 'left'

**kwargs

Additional keyword arguments.

DEFAULT: {}

RETURNS DESCRIPTION

None.

RAISES DESCRIPTION
FileNotFoundError

If the vocab_file does not exist.

TypeError

If any of the arguments are of incorrect type.

Source code in mindnlp/transformers/models/cpmbee/tokenization_cpmbee.py
def __init__(
    self,
    vocab_file,
    bos_token="<s>",
    eos_token="</s>",
    line_token="\n",
    space_token=" ",
    unk_token="<unk>",
    mask_token="<mask>",
    pad_token="<pad>",
    padding_side="left",
    **kwargs,
):
    """
    Initialize a CpmBeeTokenizer object.

    Args:
        vocab_file (str): The path to the file containing the vocabulary.
        bos_token (str, optional): The beginning of sentence token.
        eos_token (str, optional): The end of sentence token.
        line_token (str, optional): The token used to represent a new line.
        space_token (str, optional): The token used to represent a space.
        unk_token (str, optional): The token used to represent unknown words.
        mask_token (str, optional): The token used for masking.
        pad_token (str, optional): The token used for padding.
        padding_side (str, optional): The side to apply padding.
        **kwargs: Additional keyword arguments.

    Returns:
        None.

    Raises:
        FileNotFoundError: If the vocab_file does not exist.
        TypeError: If any of the arguments are of incorrect type.
    """
    self.encoder: Dict[str, int] = {}
    super().__init__(
        bos_token=bos_token,
        eos_token=eos_token,
        line_token=line_token,
        space_token=space_token,
        unk_token=unk_token,
        mask_token=mask_token,
        pad_token=pad_token,
        padding_side=padding_side,
        **kwargs,
    )

    with open(vocab_file, "r", encoding="utf-8") as reader:
        for token in reader.readlines():
            token = token.rstrip("\n")
            if len(token) == 0:
                continue
            self.encoder[token] = len(self.encoder)

    self.encoder[" "] = self.encoder["</_>"]
    self.encoder["\n"] = self.encoder["</n>"]
    del self.encoder["</_>"]
    del self.encoder["</n>"]

    self.decoder = {v: k for k, v in self.encoder.items()}

    self._max_word_len = max(len(x) for x in self.encoder.keys())
    self.cpmbee_special_tokens = {k: v for k, v in self.encoder.items() if k.startswith("<") and k.endswith(">")}

    self.ext_table: Dict[int, str] = {}
    self.ext_table_rev: Dict[str, int] = {}

    self.token_id_table: Dict[str, Dict[int, int]] = {}
    self.ext_special_tokens = []

    self.ext_args_for_model = [
        "input_id_subs",
        "input_pos",
        "context",
        "segment_ids",
        "segment_rel_offset",
        "segment_rel",
        "sample_ids",
        "num_segments",
        "predict_segments",
        "answer_placeholders",
        "ext_table",
        "token_id_table",
    ]

mindnlp.transformers.models.cpmbee.tokenization_cpmbee.CpmBeeTokenizer.__len__()

Size of the full vocabulary with the added tokens.

Source code in mindnlp/transformers/models/cpmbee/tokenization_cpmbee.py
def __len__(self):
    """
    Size of the full vocabulary with the added tokens.
    """
    return self.vocab_size + len(self.added_tokens_encoder)
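
A quick doctest-style check of the relation stated above (assuming `tokenizer` is an already constructed CpmBeeTokenizer):

Example
>>> len(tokenizer) == tokenizer.vocab_size + len(tokenizer.added_tokens_encoder)
True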

mindnlp.transformers.models.cpmbee.tokenization_cpmbee.CpmBeeTokenizer.check(token)

Checks if a token is present in the encoder.

PARAMETER DESCRIPTION
self

An instance of the CpmBeeTokenizer class.

TYPE: CpmBeeTokenizer

token

The token to be checked in the encoder.

TYPE: Any

RETURNS DESCRIPTION
bool

True if the token is present in the encoder, False otherwise.

Source code in mindnlp/transformers/models/cpmbee/tokenization_cpmbee.py
def check(self, token):
    """
    Checks if a token is present in the encoder.

    Args:
        self (CpmBeeTokenizer): An instance of the CpmBeeTokenizer class.
        token (Any): The token to be checked in the encoder.

    Returns:
        bool: True if the token is present in the encoder, False otherwise.

    Raises:
        None.
    """
    return token in self.encoder
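
A small illustration grounded in the `__init__` shown on this page, which remaps the "</n>" vocabulary entry to "\n" in the encoder (assuming `tokenizer` is an already constructed CpmBeeTokenizer):

Example
>>> tokenizer.check("\n")     # the newline token is stored under "\n" after __init__
True
>>> tokenizer.check("</n>")   # ...while the raw "</n>" entry has been removed
False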

mindnlp.transformers.models.cpmbee.tokenization_cpmbee.CpmBeeTokenizer.convert_data_to_id(data, prev_ext_states=None, shuffle_answer=True, max_depth=8)

Parse a dict into data ids. Exclusive to CPMBee. It will:

  1. parse the dict into segments and compute segment_rel, which is used for calculating position_bias.
  2. tokenize every segment.
Source code in mindnlp/transformers/models/cpmbee/tokenization_cpmbee.py
def convert_data_to_id(
    self,
    data: Any,
    prev_ext_states: Optional[_PrevExtTableStates] = None,
    shuffle_answer: bool = True,
    max_depth: int = 8,
):
    """
    Parse a dict into data ids. Exclusive to CPMBee. It will:

    1. parse the dict into segments and compute segment_rel, which is used for calculating position_bias.
    2. tokenize every segment.
    """
    root: _DictTree = {
        "value": "<root>",
        "children": [],
        "depth": 0,
        "segment_id": 0,
        "need_predict": False,
    }

    segments = [root]

    def _build_dict_tree(data: CPMBeeInputType, depth: int, need_predict: bool) -> List[_DictTree]:
        if isinstance(data, dict):
            ret_list: List[_DictTree] = []
            curr_items = list(data.items())
            if need_predict and shuffle_answer:
                access_idx = np.arange(len(curr_items))
                np.random.shuffle(access_idx)
                curr_items = [curr_items[idx] for idx in access_idx]
            for k, v in curr_items:
                child_info: _DictTree = {
                    "value": k,
                    "children": [],
                    "depth": depth,
                    "segment_id": len(segments),
                    "need_predict": False,  # only leaves are contexts
                }
                segments.append(child_info)
                child_info["children"] = _build_dict_tree(
                    v, depth + 1, need_predict or (depth == 1 and k == "<ans>")
                )  # elements in <root>.<ans>

                ret_list.append(child_info)
            return ret_list
        else:
            assert isinstance(data, str), "Invalid data {}".format(data)
            ret: _DictTree = {
                "value": data,
                "children": [],
                "depth": depth,
                "segment_id": len(segments),
                "need_predict": need_predict,
            }
            segments.append(ret)
            return [ret]

    root["children"] = _build_dict_tree(data, 1, False)

    num_segments = len(segments)
    segment_rel = np.zeros((num_segments * num_segments,), dtype=np.int32)

    def _build_segment_rel(node: _DictTree) -> List[Tuple[int, int]]:
        ret: List[Tuple[int, int]] = [(node["segment_id"], node["depth"])]
        for child in node["children"]:
            sub = _build_segment_rel(child)
            for seg_id_1, depth_1 in sub:
                for seg_id_2, depth_2 in ret:
                    n_up = min(depth_1 - node["depth"], max_depth - 1)
                    n_down = min(depth_2 - node["depth"], max_depth - 1)
                    segment_rel[seg_id_1 * num_segments + seg_id_2] = rel_to_bucket(
                        n_up, n_down, max_depth=max_depth
                    )
                    segment_rel[seg_id_2 * num_segments + seg_id_1] = rel_to_bucket(
                        n_down, n_up, max_depth=max_depth
                    )
            ret.extend(sub)
        return ret

    _build_segment_rel(root)

    input_ids: List[int] = []
    input_id_subs: List[int] = []
    segment_bound: List[Tuple[int, int]] = []

    if prev_ext_states is not None:
        self.ext_table = prev_ext_states["ext_table"]
        self.token_id_table = prev_ext_states["token_id_table"]

    for seg in segments:
        # tokenize
        tokens = self.convert_tokens_to_ids(self.tokenize(seg["value"], for_cpmbee=True))

        token_id_subs = []
        reid_token_ids = []
        for idx in tokens:
            if idx in self.ext_table:
                # unk or special token
                token = self.ext_table[idx]
                if token.startswith("<") and token.endswith(">"):
                    # special token
                    if "_" in token:
                        token_name = token[1:-1].split("_", maxsplit=1)[0]
                    else:
                        token_name = token[1:-1]
                    token_name = "<{}>".format(token_name)
                else:
                    token_name = "<unk>"

                if token_name not in self.token_id_table:
                    self.token_id_table[token_name] = {}
                if idx not in self.token_id_table[token_name]:
                    self.token_id_table[token_name][idx] = len(self.token_id_table[token_name])
                if token_name not in self.encoder:
                    raise ValueError("Invalid token {}".format(token))
                reid_token_ids.append(self.encoder[token_name])
                token_id_subs.append(self.token_id_table[token_name][idx])
            else:
                reid_token_ids.append(idx)
                token_id_subs.append(0)
        tokens = [self.bos_token_id] + reid_token_ids
        token_id_subs = [0] + token_id_subs
        # eos_id indicates no need_predict
        if not seg["need_predict"]:  # eos
            tokens = tokens + [self.eos_token_id]
            token_id_subs = token_id_subs + [0]
        else:
            # no eos
            pass
        begin = len(input_ids)
        input_ids.extend(tokens)
        input_id_subs.extend(token_id_subs)
        end = len(input_ids)
        segment_bound.append((begin, end))

    ids = np.array(input_ids, dtype=np.int32)
    id_subs = np.array(input_id_subs, dtype=np.int32)
    segs = np.zeros((ids.shape[0],), dtype=np.int32)  # number each position's segment according to segment_bound
    context = np.zeros((ids.shape[0],), dtype=np.int8)
    for i, (begin, end) in enumerate(segment_bound):
        if not segments[i]["need_predict"]:
            context[begin:end] = 1
        segs[begin:end] = i

    curr_ext_table_states: _PrevExtTableStates = {
        "ext_table": self.ext_table,
        "token_id_table": self.token_id_table,
    }
    return ids, id_subs, context, segs, segment_rel, num_segments, curr_ext_table_states
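
A rough sketch of what this method produces. The input dict is an illustrative assumption and `tokenizer` is assumed to be an already constructed CpmBeeTokenizer; the dtype and shape relations follow directly from the code above.

Example
>>> data = {"input": "Hello", "<ans>": ""}   # illustrative CPM-Bee style sample
>>> ids, id_subs, context, segs, segment_rel, num_segments, ext_states = tokenizer.convert_data_to_id(data)
>>> ids.dtype, context.dtype
(dtype('int32'), dtype('int8'))
>>> ids.shape == id_subs.shape == context.shape == segs.shape
True
>>> segment_rel.shape == (num_segments * num_segments,)
True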

mindnlp.transformers.models.cpmbee.tokenization_cpmbee.CpmBeeTokenizer.convert_tokens_to_string(tokens)

Converts a list of tokens into a single string.

PARAMETER DESCRIPTION
self

An instance of the CpmBeeTokenizer class.

TYPE: CpmBeeTokenizer

tokens

A list of tokens to be converted into a string.

TYPE: List[str]

RETURNS DESCRIPTION
str

A string representation of the tokens.

TYPE: str

This method takes in two parameters, self and tokens. The self parameter is an instance of the CpmBeeTokenizer class and is used to access the class's attributes and methods. The tokens parameter is a list of strings representing individual tokens.

The function returns a string that is obtained by concatenating all the tokens together using the ''.join() method. This method does not modify the original list of tokens.

No exceptions are raised by this method.

Source code in mindnlp/transformers/models/cpmbee/tokenization_cpmbee.py
def convert_tokens_to_string(self, tokens: List[str]) -> str:
    """
    Converts a list of tokens into a single string.

    Args:
        self (CpmBeeTokenizer): An instance of the CpmBeeTokenizer class.
        tokens (List[str]): A list of tokens to be converted into a string.

    Returns:
        str: A string representation of the tokens.

    Raises:
        None.

    This method takes in two parameters, self and tokens. The self parameter is an instance of the CpmBeeTokenizer
    class and is used to access the class's attributes and methods. The tokens parameter is a
    list of strings representing individual tokens.

    The function returns a string that is obtained by concatenating all the tokens together using the ''.join() method.
    This method does not modify the original list of tokens.

    No exceptions are raised by this method.
    """
    return "".join(tokens)

mindnlp.transformers.models.cpmbee.tokenization_cpmbee.CpmBeeTokenizer.get_piece(text)

Match with maximum length.

Source code in mindnlp/transformers/models/cpmbee/tokenization_cpmbee.py
def get_piece(self, text: str) -> str:
    """
    Match with maximum length.
    """
    len_text = len(text)
    for i in range(len(text)):
        sub = text[: len_text - i]
        if (sub in self.encoder) or (sub in self.added_tokens_encoder):
            return sub
    return text[0]
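
A small illustration of the greedy longest-match behaviour. The vocabulary contents assumed here ("北京" being an entry, and no entry starting with "☃") are assumptions made only for this example.

Example
>>> tokenizer.get_piece("北京天气")   # returns the longest prefix found in the vocabulary
'北京'
>>> tokenizer.get_piece("☃abc")      # falls back to the first character when nothing matches
'☃'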

mindnlp.transformers.models.cpmbee.tokenization_cpmbee.CpmBeeTokenizer.get_vocab()

Get the vocabulary of the CpmBeeTokenizer instance.

PARAMETER DESCRIPTION
self

The instance of the CpmBeeTokenizer class. This parameter represents the current instance of the tokenizer.

TYPE: CpmBeeTokenizer

RETURNS DESCRIPTION
dict

A dictionary containing the combined encoder and added tokens encoder. The keys represent tokens, and the values represent their corresponding IDs.

Source code in mindnlp/transformers/models/cpmbee/tokenization_cpmbee.py
def get_vocab(self):
    """
    Get the vocabulary of the CpmBeeTokenizer instance.

    Args:
        self (CpmBeeTokenizer): The instance of the CpmBeeTokenizer class.
            This parameter represents the current instance of the tokenizer.

    Returns:
        dict: A dictionary containing the combined encoder and added tokens encoder.
            The keys represent tokens, and the values represent their corresponding IDs.

    Raises:
        None.
    """
    return dict(self.encoder, **self.added_tokens_encoder)
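
A quick sanity check (assuming `tokenizer` is an already constructed CpmBeeTokenizer); the "\n" key is guaranteed by the remapping done in `__init__`:

Example
>>> vocab = tokenizer.get_vocab()
>>> isinstance(vocab, dict) and "\n" in vocab
True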

mindnlp.transformers.models.cpmbee.tokenization_cpmbee.CpmBeeTokenizer.prepare_for_finetune(data_list, max_length=2048)

Prepares the input data for fine-tuning.

PARAMETER DESCRIPTION
self

The instance of the CpmBeeTokenizer class.

TYPE: CpmBeeTokenizer

data_list

A list of dictionaries containing the input data.

TYPE: List[Dict]

max_length

The maximum length of the input data. Defaults to 2048.

TYPE: int DEFAULT: 2048

RETURNS DESCRIPTION
BatchEncoding

The batched model inputs, padded to max_length and converted to MindSpore tensors.

Source code in mindnlp/transformers/models/cpmbee/tokenization_cpmbee.py
def prepare_for_finetune(
    self,
    data_list: List[Dict],
    max_length: int = 2048
):
    """
    Prepares the input data for fine-tuning.

    Args:
        self (CpmBeeTokenizer): The instance of the CpmBeeTokenizer class.
        data_list (List[Dict]): A list of dictionaries containing the input data.
        max_length (int, optional): The maximum length of the input data. Defaults to 2048.

    Returns:
        BatchEncoding: The batched model inputs, padded to `max_length` and converted to MindSpore tensors.

    Raises:
        None.
    """
    _inputs: List[NDArray[np.int32]] = []
    _inputs_sub: List[NDArray[np.int32]] = []
    _context: List[NDArray[np.int8]] = []
    _sample_ids: List[NDArray[np.int32]] = []
    _segments: List[NDArray[np.int32]] = []
    _num_segments: List[NDArray[np.int32]] = []
    _segment_rel_offset: List[NDArray[np.int32]] = []
    _segment_rel: List[NDArray[np.int32]] = []
    _spans: List[List[int]] = []
    _raw_data: List[List[Any]] = []

    raw_data = {}
    for data in data_list:
        (
            input_ids,
            input_id_subs,
            context,
            segment_ids,
            segment_rel,
            n_segments,
            _
        ) = self.convert_data_to_id(data)

        input_ids = input_ids[: max_length]
        context = context[: max_length]
        segment_ids = segment_ids[: max_length]
        raw_data["input"] = data
        raw_data["samples"] = []

        sample_ids = np.zeros(input_ids.shape, dtype=np.int32)
        segment_rel_offset = np.zeros(input_ids.shape, dtype=np.int32)
        num_segments = np.full(input_ids.shape, n_segments, dtype=np.int32)

        _inputs.append(input_ids)
        _inputs_sub.append(input_id_subs)
        _context.append(context)
        _sample_ids.append(sample_ids)
        _segments.append(segment_ids)
        _num_segments.append(num_segments)
        _segment_rel_offset.append(segment_rel_offset)
        _segment_rel.append(segment_rel)
        _spans.append([input_ids.shape[0]])
        _raw_data.append([raw_data])

    batch_size = len(_inputs)
    inputs = np.zeros((batch_size, max_length), dtype=np.int32)
    inputs_sub = np.zeros((batch_size, max_length), dtype=np.int32)
    context = np.zeros((batch_size, max_length), dtype=np.int8)
    sample_ids = np.zeros((batch_size, max_length), dtype=np.int32)
    segments = np.zeros((batch_size, max_length), dtype=np.int32)
    num_segments = np.zeros((batch_size, max_length), dtype=np.int32)
    segment_rel_offset = np.zeros((batch_size, max_length), dtype=np.int32)
    tgt = np.full((batch_size, max_length), -100, dtype=np.int32)

    max_rel = 0
    for i in range(batch_size):
        max_rel = max(max_rel, _segment_rel[i].shape[0])
    segment_rel = np.zeros((batch_size, max_rel), dtype=np.int32)
    spans = np.zeros((batch_size, max_length), dtype=np.int32)
    length = np.zeros((batch_size,), dtype=np.int32)

    batch_ext_table_map: Dict[Tuple[int, int], int] = {}
    batch_ext_table_ids: List[int] = []
    batch_ext_table_sub: List[int] = []
    raw_data_list: List[Any] = []

    for i in range(batch_size):
        instance_length = _inputs[i].shape[0]
        rel_size = _segment_rel[i].shape[0]
        inputs[i, :instance_length] = _inputs[i]
        inputs_sub[i, :instance_length] = _inputs_sub[i]
        context[i, :instance_length] = _context[i]
        sample_ids[i, :instance_length] = _sample_ids[i]
        segments[i, :instance_length] = _segments[i]
        num_segments[i, :instance_length] = _num_segments[i]
        segment_rel_offset[i, :instance_length] = _segment_rel_offset[i]
        segment_rel[i, :rel_size] = _segment_rel[i]

        span_begin = 0
        for span_id, span_end in enumerate(_spans[i]):
            spans[i, span_begin:span_end] = span_id
            span_begin = span_end
        length[i] = instance_length
        raw_data_list.extend(_raw_data[i])

        for j in range(instance_length):
            idx, idx_sub = _inputs[i][j], _inputs_sub[i][j]
            tgt_idx = idx
            if idx_sub > 0:
                # need to be in ext table
                if (idx, idx_sub) not in batch_ext_table_map:
                    batch_ext_table_map[(idx, idx_sub)] = len(batch_ext_table_map)
                    batch_ext_table_ids.append(idx)
                    batch_ext_table_sub.append(idx_sub)
                tgt_idx = batch_ext_table_map[(idx, idx_sub)] + self.vocab_size
            if j > 1 and context[i, j - 1] == 0:
                if idx != self.bos_token_id:
                    tgt[i, j - 1] = tgt_idx
                else:
                    tgt[i, j - 1] = self.eos_token_id
        if context[i, instance_length - 1] == 0:
            tgt[i, instance_length - 1] = self.eos_token_id

    if len(batch_ext_table_map) == 0:
        # placeholder
        batch_ext_table_ids.append(0)
        batch_ext_table_sub.append(1)

    return BatchEncoding({
        "input_ids": inputs,
        "input_id_sub": inputs_sub,
        "length": length,
        "context": context > 0,
        "sample_ids": sample_ids,
        "num_segments": num_segments,
        "segment": segments,
        "segment_rel_offset": segment_rel_offset,
        "segment_rel": segment_rel,
        "span": spans,
        "labels": tgt,
        "ext_table_ids": np.array(batch_ext_table_ids, dtype=np.int32),
        "ext_table_sub": np.array(batch_ext_table_sub, dtype=np.int32)
    }, tensor_type="ms")
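
A minimal fine-tuning data-preparation sketch. The checkpoint name, the sample dicts, and `max_length` are illustrative assumptions; the key names and shapes follow from the code above.

Example
>>> from mindnlp.transformers.models.cpmbee.tokenization_cpmbee import CpmBeeTokenizer
>>> tokenizer = CpmBeeTokenizer.from_pretrained("openbmb/cpm-bee-10b")  # hypothetical checkpoint
>>> data_list = [
...     {"input": "天气不错，", "<ans>": "适合出门"},
...     {"input": "今天很热，", "<ans>": "适合游泳"},
... ]
>>> batch = tokenizer.prepare_for_finetune(data_list, max_length=64)
>>> batch["input_ids"].shape      # (batch_size, max_length), returned as MindSpore tensors
(2, 64)
>>> batch["labels"].shape         # -100 marks positions that are not predicted
(2, 64)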

mindnlp.transformers.models.cpmbee.tokenization_cpmbee.CpmBeeTokenizer.prepare_for_model(ids, pair_ids=None, add_special_tokens=True, padding=False, truncation=None, max_length=None, stride=0, pad_to_multiple_of=None, return_tensors=None, return_token_type_ids=None, return_attention_mask=None, return_overflowing_tokens=False, return_special_tokens_mask=False, return_length=False, verbose=True, prepend_batch_axis=False, **kwargs)

Prepares a sequence of input ids, or a pair of sequences of input ids, so that it can be used by the model. It adds special tokens, truncates sequences if they overflow while taking the special tokens into account, and manages a moving window (with a user-defined stride) for overflowing tokens. Please note that for pair_ids different from None and truncation_strategy = longest_first or True, it is not possible to return overflowing tokens. Such a combination of arguments will raise an error.

PARAMETER DESCRIPTION
ids

Tokenized input ids of the first sequence. Can be obtained from a string by chaining the tokenize and convert_tokens_to_ids methods.

TYPE: `List[int]`

pair_ids

Tokenized input ids of the second sequence. Can be obtained from a string by chaining the tokenize and convert_tokens_to_ids methods.

TYPE: `List[int]`, *optional* DEFAULT: None

Source code in mindnlp/transformers/models/cpmbee/tokenization_cpmbee.py
def prepare_for_model(
    self,
    ids: List[int],
    pair_ids: Optional[List[int]] = None,
    add_special_tokens: bool = True,
    padding: Union[bool, str, PaddingStrategy] = False,
    truncation: Union[bool, str, TruncationStrategy] = None,
    max_length: Optional[int] = None,
    stride: int = 0,
    pad_to_multiple_of: Optional[int] = None,
    return_tensors: Optional[Union[str, TensorType]] = None,
    return_token_type_ids: Optional[bool] = None,
    return_attention_mask: Optional[bool] = None,
    return_overflowing_tokens: bool = False,
    return_special_tokens_mask: bool = False,
    return_length: bool = False,
    verbose: bool = True,
    prepend_batch_axis: bool = False,
    **kwargs,
) -> BatchEncoding:
    """
    Prepares a sequence of input ids, or a pair of sequences of input ids, so that it can be used by the model. It
    adds special tokens, truncates sequences if they overflow while taking the special tokens into account, and
    manages a moving window (with a user-defined stride) for overflowing tokens. Please note that for *pair_ids*
    different from `None` and *truncation_strategy = longest_first* or `True`, it is not possible to return
    overflowing tokens. Such a combination of arguments will raise an error.

    Args:
        ids (`List[int]`):
            Tokenized input ids of the first sequence. Can be obtained from a string by chaining the `tokenize` and
            `convert_tokens_to_ids` methods.
        pair_ids (`List[int]`, *optional*):
            Tokenized input ids of the second sequence. Can be obtained from a string by chaining the `tokenize`
            and `convert_tokens_to_ids` methods.
    """
    # Backward compatibility for 'truncation_strategy', 'pad_to_max_length'
    padding_strategy, truncation_strategy, max_length, kwargs = self._get_padding_truncation_strategies(
        padding=padding,
        truncation=truncation,
        max_length=max_length,
        pad_to_multiple_of=pad_to_multiple_of,
        verbose=verbose,
        **kwargs,
    )

    pair = bool(pair_ids is not None)
    len_ids = len(ids)
    len_pair_ids = len(pair_ids) if pair else 0

    if return_token_type_ids and not add_special_tokens:
        raise ValueError(
            "Asking to return token_type_ids while setting add_special_tokens to False "
            "results in an undefined behavior. Please set add_special_tokens to True or "
            "set return_token_type_ids to None."
        )

    if (
        return_overflowing_tokens
        and truncation_strategy == TruncationStrategy.LONGEST_FIRST
        and pair_ids is not None
    ):
        raise ValueError(
            "Not possible to return overflowing tokens for pair of sequences with the "
            "`longest_first`. Please select another truncation strategy than `longest_first`, "
            "for instance `only_second` or `only_first`."
        )

    # Load from model defaults
    if return_token_type_ids is None:
        return_token_type_ids = "token_type_ids" in self.model_input_names
    if return_attention_mask is None:
        return_attention_mask = "attention_mask" in self.model_input_names

    encoded_inputs = {}

    # Compute the total size of the returned encodings
    total_len = len_ids + len_pair_ids + (self.num_special_tokens_to_add(pair=pair) if add_special_tokens else 0)

    # Truncation: Handle max sequence length
    overflowing_tokens = []
    if truncation_strategy != TruncationStrategy.DO_NOT_TRUNCATE and max_length and total_len > max_length:
        ids, pair_ids, overflowing_tokens = self.truncate_sequences(
            ids,
            pair_ids=pair_ids,
            num_tokens_to_remove=total_len - max_length,
            truncation_strategy=truncation_strategy,
            stride=stride,
        )

    if return_overflowing_tokens:
        encoded_inputs["overflowing_tokens"] = overflowing_tokens
        encoded_inputs["num_truncated_tokens"] = total_len - max_length

    # Add special tokens
    if add_special_tokens:
        sequence = self.build_inputs_with_special_tokens(ids, pair_ids)
        token_type_ids = self.create_token_type_ids_from_sequences(ids, pair_ids)
    else:
        sequence = ids + pair_ids if pair else ids
        token_type_ids = [0] * len(ids) + ([0] * len(pair_ids) if pair else [])

    # Build output dictionary
    encoded_inputs["input_ids"] = sequence
    if return_token_type_ids:
        encoded_inputs["token_type_ids"] = token_type_ids
    if return_special_tokens_mask:
        if add_special_tokens:
            encoded_inputs["special_tokens_mask"] = self.get_special_tokens_mask(ids, pair_ids)
        else:
            encoded_inputs["special_tokens_mask"] = [0] * len(sequence)

    # Check lengths
    self._eventual_warn_about_too_long_sequence(encoded_inputs["input_ids"], max_length, verbose)

    # Padding
    if padding_strategy != PaddingStrategy.DO_NOT_PAD or return_attention_mask:
        encoded_inputs = self.pad(
            encoded_inputs,
            max_length=max_length,
            padding=padding_strategy.value,
            pad_to_multiple_of=pad_to_multiple_of,
            return_attention_mask=return_attention_mask,
        )

    if return_length:
        encoded_inputs["length"] = len(encoded_inputs["input_ids"])

    # for CPMBee, encode all the model arguments
    for arg in self.ext_args_for_model:
        v = kwargs.get(arg, None)
        if v is not None:
            encoded_inputs[arg] = v

    batch_outputs = BatchEncoding(
        encoded_inputs, tensor_type=return_tensors, prepend_batch_axis=prepend_batch_axis
    )

    return batch_outputs
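
A short sketch of calling prepare_for_model directly on pre-tokenized ids. The input text is an illustrative assumption, and `tokenizer` is assumed to be an already constructed CpmBeeTokenizer.

Example
>>> ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Hello CPM-Bee", for_cpmbee=True))
>>> encoding = tokenizer.prepare_for_model(ids, add_special_tokens=True, return_length=True)
>>> "input_ids" in encoding and "length" in encoding
True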

mindnlp.transformers.models.cpmbee.tokenization_cpmbee.CpmBeeTokenizer.save_vocabulary(save_directory, filename_prefix=None)

Save the vocabulary to a file.

PARAMETER DESCRIPTION
self

The instance of the CpmBeeTokenizer class.

TYPE: CpmBeeTokenizer

save_directory

The directory where the vocabulary file will be saved.

TYPE: str

filename_prefix

An optional prefix to prepend to the filename. Default is None.

TYPE: Optional[str] DEFAULT: None

RETURNS DESCRIPTION
Tuple[str]

Tuple[str]: A tuple containing the path to the saved vocabulary file.

RAISES DESCRIPTION
IOError

If there is an issue with reading or writing the vocabulary file.

ValueError

If the provided save_directory is not a valid directory.

KeyError

If any of the keys used for encoding tokens are not found in the encoder dictionary.

Source code in mindnlp/transformers/models/cpmbee/tokenization_cpmbee.py
def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]:
    """
    Save the vocabulary to a file.

    Args:
        self (CpmBeeTokenizer): The instance of the CpmBeeTokenizer class.
        save_directory (str): The directory where the vocabulary file will be saved.
        filename_prefix (Optional[str]): An optional prefix to prepend to the filename. Default is None.

    Returns:
        Tuple[str]: A tuple containing the path to the saved vocabulary file.

    Raises:
        IOError: If there is an issue with reading or writing the vocabulary file.
        ValueError: If the provided save_directory is not a valid directory.
        KeyError: If any of the keys used for encoding tokens are not found in the encoder dictionary.
    """
    if os.path.isdir(save_directory):
        vocab_file = os.path.join(
            save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"]
        )
    else:
        vocab_file = (filename_prefix + "-" if filename_prefix else "") + save_directory
    index = 0
    self.encoder["</n>"] = self.encoder["\n"]
    del self.encoder["\n"]
    self.encoder["</_>"] = self.encoder[" "]
    del self.encoder[" "]
    with open(vocab_file, "w", encoding="utf-8") as writer:
        for token, token_index in sorted(self.encoder.items(), key=lambda x: x[1]):
            if index != token_index:
                logger.warning(
                    f"Saving vocabulary to {vocab_file}: vocabulary indices are not consecutive."
                    " Please check that the vocabulary is not corrupted!"
                )
                index = token_index
            writer.write(token + "\n")
            index += 1
    return (vocab_file,)
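
A typical round-trip sketch; the directory and prefix names are illustrative assumptions, and `tokenizer` is assumed to be an already constructed CpmBeeTokenizer.

Example
>>> import os
>>> os.makedirs("cpmbee_vocab", exist_ok=True)
>>> (vocab_path,) = tokenizer.save_vocabulary("cpmbee_vocab", filename_prefix="demo")
>>> os.path.isfile(vocab_path)
True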

mindnlp.transformers.models.cpmbee.tokenization_cpmbee.CpmBeeTokenizer.tokenize(text, **kwargs)

Override the tokenize to meet the needs of CPMBee:

  1. Mark the special token with < and >. The <> will be ignored.
  2. Split sentences by the marked special tokens.
  3. Record the marked special token by ext_table and ext_table_rev.
  4. Tokenize the sentence without special tokens.
Source code in mindnlp/transformers/models/cpmbee/tokenization_cpmbee.py
def tokenize(self, text: TextInput, **kwargs) -> List[str]:
    r"""
    Override the `tokenize` to meet the needs of CPMBee:

    1. Mark the special token with `<` and `>`. The `<>` will be ignored.
    2. Split sentences by the marked special tokens.
    3. Record the marked special token by `ext_table` and `ext_table_rev`.
    4. Tokenize the sentence without special tokens.
    """
    for_cpmbee = kwargs.get("for_cpmbee", False)
    all_special_tokens_extended = {
        str(t): t for t in self.all_special_tokens_extended if isinstance(t, AddedToken)
    }

    sentence_split = [""]
    is_special_token = False
    for i, c in enumerate(text):
        if is_special_token:
            if c == "<":
                tail = sentence_split.pop(-1)
                sentence_split[-1] += tail
                sentence_split.append(c)
                is_special_token = False
            elif c == ">":
                # end of special token
                sentence_split[-1] += c
                if sentence_split[-1] == "<>":
                    continue
                is_special_token = False
                sentence_split.append("")
            else:
                sentence_split[-1] += c
        else:
            if c == "<":
                is_special_token = True
                sentence_split.append(c)
            else:
                sentence_split[-1] += c
    if is_special_token:
        tail = sentence_split.pop(-1)
        sentence_split[-1] += tail

    output_tokens = []
    for i, part in enumerate(sentence_split):
        if (i & 1) == 1:
            # special token
            output_tokens.append(part)
            if for_cpmbee and (part not in self.encoder) and (part not in self.ext_table_rev):
                self.ext_table_rev[part] = len(self.ext_table_rev) + self.vocab_size
                self.ext_table[self.ext_table_rev[part]] = part
        else:
            output_tokens.extend(self._tokenize(part, for_cpmbee=for_cpmbee))

    # drop spaces
    for i, token in enumerate(output_tokens):
        if token in self.added_tokens_encoder:
            token = all_special_tokens_extended.get(token, None)
            left = output_tokens[i - 1] if i > 0 else None
            right = output_tokens[i + 1] if i < len(output_tokens) - 1 else None
            if isinstance(token, AddedToken):
                if token.rstrip and right:
                    # A bit counter-intuitive but we strip the left of the string
                    # since tok_extended.rstrip means the special token is eating all white spaces on its right
                    output_tokens[i + 1] = right.lstrip()
                # Strip white spaces on the left
                if token.lstrip and left:
                    output_tokens[i - 1] = left.rstrip()  # Opposite here
            else:
                if right:
                    output_tokens[i + 1] = right.lstrip()
                if left:
                    output_tokens[i - 1] = left.rstrip()

    skipped_tokens = []
    for token in output_tokens:
        if not token:
            continue
        skipped_tokens.append(token)

    return skipped_tokens
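
An illustration of the special-token marking described above. The concrete token "<mask_0>" and the assumption that `tokenizer` is an already constructed CpmBeeTokenizer are made only for this example.

Example
>>> tokens = tokenizer.tokenize("<mask_0>天气不错", for_cpmbee=True)
>>> tokens[0]   # the <...>-marked special token survives as a single piece
'<mask_0>'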

mindnlp.transformers.models.cpmbee.tokenization_cpmbee.rel_to_bucket(n_up, n_down, max_depth=8)

Maps the relation between two segments in the data tree, given the number of hops up and down to their common ancestor, to a relation bucket id used for the segment position bias.

PARAMETER DESCRIPTION
n_up

The number of tree levels between the first segment and the common ancestor.

TYPE: int

n_down

The number of tree levels between the second segment and the common ancestor.

TYPE: int

max_depth

The maximum tree depth considered when bucketing. Defaults to 8.

TYPE: int DEFAULT: 8

RETURNS DESCRIPTION
int

The relation bucket id (bucket 1 is reserved for in-context samples).

Source code in mindnlp/transformers/models/cpmbee/tokenization_cpmbee.py
def rel_to_bucket(n_up: int, n_down: int, max_depth: int = 8):
    """
    Maps the relation between two segments in the data tree, given the number of hops up and down to
    their common ancestor, to a relation bucket id used for the segment position bias.

    Args:
        n_up (int): The number of tree levels between the first segment and the common ancestor.
        n_down (int): The number of tree levels between the second segment and the common ancestor.
        max_depth (int, optional): The maximum tree depth considered when bucketing. Defaults to 8.

    Returns:
        int: The relation bucket id (bucket 1 is reserved for in-context samples).

    Raises:
        None.

    """
    ret = n_up * max_depth + n_down
    if ret == 0:
        return ret
    else:
        # bucket 1 is reserved for incontext samples
        return ret + 1
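
Worked values that follow directly from the formula ret = n_up * max_depth + n_down, shifted by one for non-zero results because bucket 1 is reserved for in-context samples:

Example
>>> rel_to_bucket(0, 0)                 # the two segments coincide
0
>>> rel_to_bucket(0, 1)                 # 0 * 8 + 1 = 1, then +1 to skip the reserved bucket
2
>>> rel_to_bucket(2, 3, max_depth=8)    # 2 * 8 + 3 = 19, then +1
20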

mindnlp.transformers.models.cpmbee.modeling_cpmbee

MindSpore CpmBee model.

mindnlp.transformers.models.cpmbee.modeling_cpmbee.CpmBeeAttention

Bases: Module

This class represents the attention mechanism used in the CpmBee model. It inherits from the nn.Module class.

ATTRIBUTE DESCRIPTION
dim_model

The hidden size of the model.

TYPE: int

num_heads

The number of attention heads.

TYPE: int

dim_head

The dimension of each attention head.

TYPE: int

project_q

Linear layer for projecting the query.

TYPE: CpmBeeLinear

project_k

Linear layer for projecting the key.

TYPE: CpmBeeLinear

project_v

Linear layer for projecting the value.

TYPE: CpmBeeLinear

attention_out

Linear layer for the output of the attention mechanism.

TYPE: CpmBeeLinear

softmax

Softmax function for computing attention weights.

TYPE: Softmax

dropout

Dropout layer for regularization (optional).

TYPE: Dropout or None

METHOD DESCRIPTION
__init__

Initializes the CpmBeeAttention class.

forward

Constructs the attention mechanism.

Source code in mindnlp/transformers/models/cpmbee/modeling_cpmbee.py
class CpmBeeAttention(nn.Module):

    """
    This class represents the attention mechanism used in the CpmBee model. It inherits from the nn.Module class.

    Attributes:
        dim_model (int): The hidden size of the model.
        num_heads (int): The number of attention heads.
        dim_head (int): The dimension of each attention head.
        project_q (CpmBeeLinear): Linear layer for projecting the query.
        project_k (CpmBeeLinear): Linear layer for projecting the key.
        project_v (CpmBeeLinear): Linear layer for projecting the value.
        attention_out (CpmBeeLinear): Linear layer for the output of the attention mechanism.
        softmax (nn.Softmax): Softmax function for computing attention weights.
        dropout (nn.Dropout or None): Dropout layer for regularization (optional).

    Methods:
        __init__:
            Initializes the CpmBeeAttention class.

        forward:
            Constructs the attention mechanism.
    """
    def __init__(self, config: CpmBeeConfig):
        """
        Initializes an instance of the CpmBeeAttention class.

        Args:
            self: The instance of the class.
            config (CpmBeeConfig):
                The configuration object containing the following attributes:

                - hidden_size (int): The dimension of the model.
                - num_attention_heads (int): The number of attention heads.
                - dim_head (int): The dimension of each attention head.
                - ms_dtype: The data type used for the linear layers.
                - dropout_p (float, optional): The probability of an element to be zeroed during dropout.
                If not provided, no dropout is applied.

        Returns:
            None

        Raises:
            None
        """
        super().__init__()
        self.dim_model = config.hidden_size
        self.num_heads = config.num_attention_heads
        self.dim_head = config.dim_head

        self.project_q = CpmBeeLinear(self.dim_model, self.num_heads * self.dim_head, dtype=config.ms_dtype)
        self.project_k = CpmBeeLinear(self.dim_model, self.num_heads * self.dim_head, dtype=config.ms_dtype)
        self.project_v = CpmBeeLinear(self.dim_model, self.num_heads * self.dim_head, dtype=config.ms_dtype)

        self.attention_out = CpmBeeLinear(self.num_heads * self.dim_head, self.dim_model, dtype=config.ms_dtype)

        self.softmax = nn.Softmax(axis=-1)

        if config.dropout_p is not None:
            self.dropout = nn.Dropout(p=config.dropout_p)
        else:
            self.dropout = None

    def forward(
        self,
        hidden_q: mindspore.Tensor,
        hidden_kv: mindspore.Tensor,
        attention_mask: mindspore.Tensor,
        position_bias: mindspore.Tensor,
        output_attentions: Optional[bool] = False,
        past_key_values: Optional[Tuple[mindspore.Tensor, mindspore.Tensor]] = None,
        use_cache: Optional[bool] = None,
    ):
        """
        Args:
            hidden_q (`mindspore.Tensor`):
                Input of transformer block(self-attention block). It can be the raw embedding of a batch of sequences.
            hidden_kv (`mindspore.Tensor` of shape `(batch, len_k, dim_model)`):
                Tensor *key_value* and *query* of shape `(batch, len_k, dim_model)`
            attention_mask (`mindspore.Tensor` of shape `(batch, len_seq, len_seq)`):
                Avoid invalid areas to participate in the calculation of self-attention.
            position_bias (`mindspore.Tensor` of shape `(batch, len_seq, len_seq)`):
                Provide positional information to self-attention block.
            output_attentions (`bool`, *optional*):
                Whether or not to return the attentions tensors of all attention layers.
            past_key_values (`Tuple[mindspore.Tensor, mindspore.Tensor]`, *optional*):
                Cached past key and value projection states.
            use_cache (`bool`, *optional*):
                If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding
                (see `past_key_values`).
        """
        batch_size = hidden_q.shape[0]
        len_q = hidden_q.shape[1]
        len_k = hidden_kv.shape[1]

        query = self.project_q(hidden_q)
        key = self.project_k(hidden_kv)
        value = self.project_v(hidden_kv)

        query = query.view(batch_size, len_q, self.num_heads, self.dim_head).permute(0, 2, 1, 3)
        key = key.view(batch_size, len_k, self.num_heads, self.dim_head).permute(0, 2, 1, 3)
        value = value.view(batch_size, len_k, self.num_heads, self.dim_head).permute(0, 2, 1, 3)

        if past_key_values is not None:
            key = ops.cat([past_key_values[0], key], axis=-2)
            value = ops.cat([past_key_values[1], value], axis=-2)
            len_k = key.shape[-2]

        # (batch_size, num_heads, len_q, dim_head) @ (batch_size, num_heads, dim_head, len_k) -> (batch_size, num_heads, len_q, len_k)
        score = ops.matmul(query, key.swapaxes(-1, -2)) / math.sqrt(self.dim_head)
        score = score + position_bias

        score = ops.masked_fill(
            score,
            attention_mask.view(batch_size, 1, len_q, len_k) == mindspore.tensor(False),
            ops.scalar_to_tensor(float("-inf"), dtype=score.dtype),
        )
        score = self.softmax(score)

        score = ops.masked_fill(
            score,
            attention_mask.view(batch_size, 1, len_q, len_k) == mindspore.tensor(False),
            ops.scalar_to_tensor(0, dtype=score.dtype),
        )
        if output_attentions:
            attn_weights = score
        else:
            attn_weights = None

        if self.dropout is not None:
            score = self.dropout(score)

        # (batch_size, num_heads, len_q, len_k) @ (batch_size, num_heads, len_k, dim_head) -> (batch_size, num_heads, len_q, dim_head)
        score = ops.matmul(score, value)

        score = score.view(batch_size, self.num_heads, len_q, self.dim_head).permute(0, 2, 1, 3)
        score = score.view(batch_size, len_q, self.num_heads * self.dim_head)

        score = self.attention_out(score)

        past_key_values = None
        if use_cache:
            past_key_values = (key, value)

        return score, attn_weights, past_key_values

mindnlp.transformers.models.cpmbee.modeling_cpmbee.CpmBeeAttention.__init__(config)

Initializes an instance of the CpmBeeAttention class.

PARAMETER DESCRIPTION
self

The instance of the class.

config

The configuration object containing the following attributes:

  • hidden_size (int): The dimension of the model.
  • num_attention_heads (int): The number of attention heads.
  • dim_head (int): The dimension of each attention head.
  • ms_dtype: The data type used for the linear layers.
  • dropout_p (float, optional): The probability of an element to be zeroed during dropout. If not provided, no dropout is applied.

TYPE: CpmBeeConfig

RETURNS DESCRIPTION

None

Source code in mindnlp/transformers/models/cpmbee/modeling_cpmbee.py
def __init__(self, config: CpmBeeConfig):
    """
    Initializes an instance of the CpmBeeAttention class.

    Args:
        self: The instance of the class.
        config (CpmBeeConfig):
            The configuration object containing the following attributes:

            - hidden_size (int): The dimension of the model.
            - num_attention_heads (int): The number of attention heads.
            - dim_head (int): The dimension of each attention head.
            - ms_dtype: The data type used for the linear layers.
            - dropout_p (float, optional): The probability of an element to be zeroed during dropout.
            If not provided, no dropout is applied.

    Returns:
        None

    Raises:
        None
    """
    super().__init__()
    self.dim_model = config.hidden_size
    self.num_heads = config.num_attention_heads
    self.dim_head = config.dim_head

    self.project_q = CpmBeeLinear(self.dim_model, self.num_heads * self.dim_head, dtype=config.ms_dtype)
    self.project_k = CpmBeeLinear(self.dim_model, self.num_heads * self.dim_head, dtype=config.ms_dtype)
    self.project_v = CpmBeeLinear(self.dim_model, self.num_heads * self.dim_head, dtype=config.ms_dtype)

    self.attention_out = CpmBeeLinear(self.num_heads * self.dim_head, self.dim_model, dtype=config.ms_dtype)

    self.softmax = nn.Softmax(axis=-1)

    if config.dropout_p is not None:
        self.dropout = nn.Dropout(p=config.dropout_p)
    else:
        self.dropout = None
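Each projection above maps `hidden_size` to `num_heads * dim_head`, and `forward` later splits that width into per-head slices. A minimal NumPy sketch (illustration only, with made-up toy dimensions, not the library API) of that split:

```python
import numpy as np

# Toy dimensions for illustration; the real values come from CpmBeeConfig.
batch, seq_len, num_heads, dim_head = 2, 5, 4, 8

# A tensor of width num_heads * dim_head, as produced by project_q / project_k / project_v.
projected = np.random.randn(batch, seq_len, num_heads * dim_head)

# Split into heads and move the head axis forward, mirroring the
# view(...).permute(0, 2, 1, 3) calls in CpmBeeAttention.forward.
per_head = projected.reshape(batch, seq_len, num_heads, dim_head).transpose(0, 2, 1, 3)
print(per_head.shape)  # (2, 4, 5, 8) == (batch, num_heads, seq_len, dim_head)
```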

mindnlp.transformers.models.cpmbee.modeling_cpmbee.CpmBeeAttention.forward(hidden_q, hidden_kv, attention_mask, position_bias, output_attentions=False, past_key_values=None, use_cache=None)

PARAMETER DESCRIPTION
hidden_q

Input of the transformer block (self-attention block). It can be the raw embedding of a batch of sequences.

TYPE: `mindspore.Tensor`

hidden_kv

Tensor from which the key and value projections are computed, of shape (batch, len_k, dim_model).

TYPE: `mindspore.Tensor` of shape `(batch, len_k, dim_model)`

attention_mask

Mask that prevents invalid positions from participating in the self-attention calculation.

TYPE: `mindspore.Tensor` of shape `(batch, len_seq, len_seq)`

position_bias

Provides positional information to the self-attention block.

TYPE: `mindspore.Tensor` of shape `(batch, len_seq, len_seq)`

output_attentions

Whether or not to return the attentions tensors of all attention layers.

TYPE: `bool`, *optional* DEFAULT: False

past_key_values

Cached past key and value projection states.

TYPE: `Tuple[mindspore.Tensor, mindspore.Tensor]`, *optional* DEFAULT: None

use_cache

If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).

TYPE: `bool`, *optional* DEFAULT: None

Source code in mindnlp/transformers/models/cpmbee/modeling_cpmbee.py
def forward(
    self,
    hidden_q: mindspore.Tensor,
    hidden_kv: mindspore.Tensor,
    attention_mask: mindspore.Tensor,
    position_bias: mindspore.Tensor,
    output_attentions: Optional[bool] = False,
    past_key_values: Optional[Tuple[mindspore.Tensor, mindspore.Tensor]] = None,
    use_cache: Optional[bool] = None,
):
    """
    Args:
        hidden_q (`mindspore.Tensor`):
            Input of the transformer block (self-attention block). It can be the raw embedding of a batch of sequences.
        hidden_kv (`mindspore.Tensor` of shape `(batch, len_k, dim_model)`):
            Tensor from which the *key* and *value* projections are computed, of shape `(batch, len_k, dim_model)`.
        attention_mask (`mindspore.Tensor` of shape `(batch, len_seq, len_seq)`):
            Mask that prevents invalid positions from participating in the self-attention calculation.
        position_bias (`mindspore.Tensor` of shape `(batch, len_seq, len_seq)`):
            Provides positional information to the self-attention block.
        output_attentions (`bool`, *optional*):
            Whether or not to return the attentions tensors of all attention layers.
        past_key_values (`Tuple[mindspore.Tensor, mindspore.Tensor]`, *optional*):
            Cached past key and value projection states.
        use_cache (`bool`, *optional*):
            If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding
            (see `past_key_values`).
    """
    batch_size = hidden_q.shape[0]
    len_q = hidden_q.shape[1]
    len_k = hidden_kv.shape[1]

    query = self.project_q(hidden_q)
    key = self.project_k(hidden_kv)
    value = self.project_v(hidden_kv)

    query = query.view(batch_size, len_q, self.num_heads, self.dim_head).permute(0, 2, 1, 3)
    key = key.view(batch_size, len_k, self.num_heads, self.dim_head).permute(0, 2, 1, 3)
    value = value.view(batch_size, len_k, self.num_heads, self.dim_head).permute(0, 2, 1, 3)

    if past_key_values is not None:
        key = ops.cat([past_key_values[0], key], axis=-2)
        value = ops.cat([past_key_values[1], value], axis=-2)
        len_k = key.shape[-2]

    # (batch_size, num_heads, len_q, dim_head) @ (batch_size, num_heads, dim_head, len_k) -> (batch_size, num_heads, len_q, len_k)
    score = ops.matmul(query, key.swapaxes(-1, -2)) / math.sqrt(self.dim_head)
    score = score + position_bias

    score = ops.masked_fill(
        score,
        attention_mask.view(batch_size, 1, len_q, len_k) == mindspore.tensor(False),
        ops.scalar_to_tensor(float("-inf"), dtype=score.dtype),
    )
    score = self.softmax(score)

    score = ops.masked_fill(
        score,
        attention_mask.view(batch_size, 1, len_q, len_k) == mindspore.tensor(False),
        ops.scalar_to_tensor(0, dtype=score.dtype),
    )
    if output_attentions:
        attn_weights = score
    else:
        attn_weights = None

    if self.dropout is not None:
        score = self.dropout(score)

    # (batch_size, num_heads, len_q, len_k) @ (batch_size, num_heads, len_k, dim_head) -> (batch_size, num_heads, len_q, dim_head)
    score = ops.matmul(score, value)

    score = score.view(batch_size, self.num_heads, len_q, self.dim_head).permute(0, 2, 1, 3)
    score = score.view(batch_size, len_q, self.num_heads * self.dim_head)

    score = self.attention_out(score)

    past_key_values = None
    if use_cache:
        past_key_values = (key, value)

    return score, attn_weights, past_key_values
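To make the masking pattern above concrete, here is a minimal NumPy sketch (illustration only, not the mindspore implementation): scores are scaled, the position bias is added, masked positions are filled with `-inf` before the softmax, and then zeroed again after it.

```python
import numpy as np

def sketch_attention(query, key, value, attention_mask, position_bias):
    """Plain-NumPy sketch of the masking pattern used above (illustration only).

    query:          (batch, num_heads, len_q, dim_head)
    key, value:     (batch, num_heads, len_k, dim_head)
    attention_mask: (batch, len_q, len_k) boolean, True = position may be attended to
    position_bias:  (batch, num_heads, len_q, len_k)
    """
    dim_head = query.shape[-1]
    score = query @ key.transpose(0, 1, 3, 2) / np.sqrt(dim_head) + position_bias

    mask = attention_mask[:, None, :, :]                # broadcast over heads
    score = np.where(mask, score, -np.inf)              # fill masked positions with -inf before softmax

    score = score - score.max(axis=-1, keepdims=True)   # numerically stable softmax
    weights = np.exp(score)
    weights = weights / weights.sum(axis=-1, keepdims=True)

    weights = np.where(mask, weights, 0.0)               # zero out masked positions again after softmax
    return weights @ value                               # (batch, num_heads, len_q, dim_head)

# Toy usage with a causal mask.
b, h, lq, lk, d = 1, 2, 3, 3, 4
out = sketch_attention(
    np.random.randn(b, h, lq, d),
    np.random.randn(b, h, lk, d),
    np.random.randn(b, h, lk, d),
    np.tril(np.ones((lq, lk), dtype=bool))[None],
    np.zeros((b, h, lq, lk)),
)
print(out.shape)  # (1, 2, 3, 4)
```

The second fill in the sketch mirrors the second `ops.masked_fill` above, which keeps masked positions at exactly zero after the softmax.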

mindnlp.transformers.models.cpmbee.modeling_cpmbee.CpmBeeBeamHypotheses

Bases: BeamHypotheses

This class represents a set of beam hypotheses for the CpmBee model. It is derived from the BeamHypotheses class.

The CpmBeeBeamHypotheses class is used to store and manage a list of beam hypotheses along with their scores and beam indices. Each hypothesis consists of a sequence of predicted tokens and a corresponding sum of log probabilities. The class provides methods to add new hypotheses, update the list of hypotheses, and retrieve the best hypotheses based on their scores.

ATTRIBUTE DESCRIPTION
beams

A list of tuples representing the beam hypotheses. Each tuple contains the hypothesis score, the predicted token sequence, and the beam indices.

TYPE: List[Tuple[float, List, Optional[Tensor]]]

worst_score

The score of the worst hypothesis in the list.

TYPE: float

num_beams

The maximum number of beam hypotheses to be stored.

TYPE: int

length_penalty

The length penalty factor applied to the hypothesis scores.

TYPE: float

METHOD DESCRIPTION
add

Add a new hypothesis to the list of beam hypotheses. The hypothesis is represented by a sequence of predicted tokens and its sum of log probabilities. Optionally, the beam indices can also be provided.

update

Update the list of beam hypotheses by removing the worst hypothesis if the maximum number of hypotheses is exceeded.

get_best

Retrieve the best num_best beam hypotheses based on their scores. The hypotheses are returned as a list of tuples, where each tuple contains the hypothesis score, the predicted token sequence, and the beam indices.

Source code in mindnlp/transformers/models/cpmbee/modeling_cpmbee.py
class CpmBeeBeamHypotheses(BeamHypotheses):

    """
    This class represents a set of beam hypotheses for the CpmBee model. It is derived from the BeamHypotheses class.

    The CpmBeeBeamHypotheses class is used to store and manage a list of beam hypotheses along with their scores
    and beam indices. Each hypothesis consists of a sequence of predicted tokens and a corresponding sum of log
    probabilities. The class provides methods to add new hypotheses, update the list of hypotheses, and retrieve
    the best hypotheses based on their scores.

    Attributes:
        beams (List[Tuple[float, List, Optional[mindspore.Tensor]]]): A list of tuples representing the beam hypotheses.
            Each tuple contains the hypothesis score, the predicted token sequence, and the beam indices.
        worst_score (float): The score of the worst hypothesis in the list.
        num_beams (int): The maximum number of beam hypotheses to be stored.
        length_penalty (float): The length penalty factor applied to the hypothesis scores.

    Methods:
        add:
            Add a new hypothesis to the list of beam hypotheses. The hypothesis is represented by a sequence of
            predicted tokens and its sum of log probabilities. Optionally, the beam indices can also be provided.

        update:
            Update the list of beam hypotheses by removing the worst hypothesis if the maximum number of hypotheses
            is exceeded.

        get_best:
            Retrieve the best `num_best` beam hypotheses based on their scores. The hypotheses are returned as a list
            of tuples, where each tuple contains the hypothesis score, the predicted token sequence, and the beam indices.
    """
    def add(self, hyp: List, sum_logprobs: float, beam_indices: Optional[mindspore.Tensor] = None):
        """
        Add a new hypothesis to the list.
        """
        score = sum_logprobs / (len(hyp) ** self.length_penalty)
        if len(self) < self.num_beams or score > self.worst_score:
            self.beams.append((score, hyp, beam_indices))
            if len(self) > self.num_beams:
                sorted_next_scores = sorted([(s, idx) for idx, (s, _, _) in enumerate(self.beams)])
                del self.beams[sorted_next_scores[0][1]]
                self.worst_score = sorted_next_scores[1][0]
            else:
                self.worst_score = min(score, self.worst_score)

mindnlp.transformers.models.cpmbee.modeling_cpmbee.CpmBeeBeamHypotheses.add(hyp, sum_logprobs, beam_indices=None)

Add a new hypothesis to the list.

Source code in mindnlp/transformers/models/cpmbee/modeling_cpmbee.py
def add(self, hyp: List, sum_logprobs: float, beam_indices: Optional[mindspore.Tensor] = None):
    """
    Add a new hypothesis to the list.
    """
    score = sum_logprobs / (len(hyp) ** self.length_penalty)
    if len(self) < self.num_beams or score > self.worst_score:
        self.beams.append((score, hyp, beam_indices))
        if len(self) > self.num_beams:
            sorted_next_scores = sorted([(s, idx) for idx, (s, _, _) in enumerate(self.beams)])
            del self.beams[sorted_next_scores[0][1]]
            self.worst_score = sorted_next_scores[1][0]
        else:
            self.worst_score = min(score, self.worst_score)
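A toy, plain-Python sketch (illustration only, not this class) of the length-penalty bookkeeping that `add` performs: hypotheses are ranked by `sum_logprobs / len(hyp) ** length_penalty` and only the `num_beams` best are retained.

```python
# Hypothetical hypotheses are plain lists of token ids; scores are log-probabilities.
num_beams, length_penalty = 2, 1.0
beams = []  # list of (score, hypothesis) tuples

def add(hyp, sum_logprobs):
    score = sum_logprobs / (len(hyp) ** length_penalty)  # length-penalized score
    beams.append((score, hyp))
    beams.sort(key=lambda x: x[0], reverse=True)
    del beams[num_beams:]                                 # evict everything beyond the best num_beams

add([1, 2, 3], -3.0)     # score -1.0
add([1, 2], -1.0)        # score -0.5
add([1, 2, 3, 4], -8.0)  # score -2.0, evicted immediately
print(beams)             # [(-0.5, [1, 2]), (-1.0, [1, 2, 3])]
```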

mindnlp.transformers.models.cpmbee.modeling_cpmbee.CpmBeeBeamSearchScorer

Bases: BeamSearchScorer

Override BeamSearchScorer for CPMBee to support:

  1. Replace beam_tokens by beam_states, containing idx, ans, nx_token_id...
  2. The process will update the beam_states
  3. The finalize will just return the best hypotheses as a list.
Source code in mindnlp/transformers/models/cpmbee/modeling_cpmbee.py
class CpmBeeBeamSearchScorer(BeamSearchScorer):
    """
    Override BeamSearchScorer for CPMBee to support:

    1. Replace beam_tokens by beam_states, containing `idx`, `ans`, `nx_token_id`...
    2. The `process` will update the beam_states
    3. The `finalize` will just return the best hypotheses as a list.
    """
    def __init__(
        self,
        batch_size: int,
        num_beams: int,
        length_penalty: Optional[float] = 1.0,
        do_early_stopping: Optional[Union[bool, str]] = False,
        num_beam_hyps_to_keep: Optional[int] = 1,
        num_beam_groups: Optional[int] = 1,
        max_length: Optional[int] = None,
        **model_kwargs,
    ):
        """
        Initializes the CpmBeeBeamSearchScorer object.

        Args:
            batch_size (int): The batch size for beam search.
            num_beams (int): The number of beams for beam search.
            length_penalty (float, optional): The length penalty for beam search. Defaults to 1.0.
            do_early_stopping (bool or str, optional): Flag to indicate if early stopping should be performed.
                Defaults to False.
            num_beam_hyps_to_keep (int, optional): The number of beam hypotheses to keep. Defaults to 1.
            num_beam_groups (int, optional): The number of beam groups for beam search. Defaults to 1.
            max_length (int, optional): The maximum length for beam search. Defaults to None.
            **model_kwargs: Additional model-specific keyword arguments.

        Returns:
            None.

        Raises:
            ValueError: If the provided batch size, num_beams, num_beam_groups, or max_length is not a positive integer.
            TypeError: If the provided length_penalty is not a float or if do_early_stopping is not a bool or str.
            RuntimeError: If an error occurs during initialization.
        """
        super().__init__(batch_size, num_beams, length_penalty, do_early_stopping, num_beam_hyps_to_keep, num_beam_groups, max_length)
        self.num_beams = num_beams
        self.length_penalty = length_penalty
        self.do_early_stopping = do_early_stopping
        self.num_beam_hyps_to_keep = num_beam_hyps_to_keep
        self.num_beam_groups = num_beam_groups
        self.group_size = self.num_beams // self.num_beam_groups

        self._is_init = False
        self._beam_hyps = [
            CpmBeeBeamHypotheses(
                num_beams=self.num_beams,
                length_penalty=self.length_penalty,
                early_stopping=self.do_early_stopping,
                max_length=max_length,
            )
            for _ in range(batch_size)
        ]
        self._done = mindspore.tensor([False for _ in range(batch_size)], dtype=mindspore.bool_)

        self.beam_states = []
        for sent_id in range(batch_size):
            instance_beam_states = []

            for _ in range(self.num_beams):
                instance_beam_states.append(
                    {
                        "idx": 0,
                        "ans": [],
                        "nx_token_id": 6,
                        "nx_token_sub": 0,
                        "nx_segment_id": model_kwargs["other_info"][sent_id]["predict_segments"][0][0],
                        "nx_position": 0,
                    }
                )
            self.beam_states.append(instance_beam_states)

    def process(
        self,
        batch_size: int,
        cur_len: int,
        _next_scores: mindspore.Tensor,
        next_scores: mindspore.Tensor,
        next_tokens: mindspore.Tensor,
        vocab_size: Optional[int] = None,
        pad_token_id: Optional[int] = None,
        bos_token_id: Optional[int] = None,
        eos_token_id: Optional[Union[int, List[int]]] = None,
        max_length: Optional[int] = None,
        ext_table_sub_cpu: Optional[mindspore.Tensor] = None,
        ext_table_ids_cpu: Optional[mindspore.Tensor] = None,
        **model_kwargs,
    ) -> Tuple[mindspore.Tensor]:
        """
        Process the beam search for the CpmBeeBeamSearchScorer.

        Args:
            self: The instance of the CpmBeeBeamSearchScorer class.
            batch_size (int): The batch size for processing.
            cur_len (int): The current length of the sequence being processed.
            _next_scores (mindspore.Tensor): The scores for the next tokens.
            next_scores (mindspore.Tensor): The scores for the next tokens.
            next_tokens (mindspore.Tensor): The tokens for the next sequence.
            vocab_size (Optional[int]): The size of the vocabulary. Defaults to None.
            pad_token_id (Optional[int]): The token ID for padding. Defaults to None.
            bos_token_id (Optional[int]): The token ID for the beginning of sequence. Defaults to None.
            eos_token_id (Optional[Union[int, List[int]]]): The token ID for the end of sequence. Defaults to None.
            max_length (Optional[int]): The maximum length of the sequence. Defaults to None.
            ext_table_sub_cpu (Optional[mindspore.Tensor]): The CPU tensor for extended table sub.
            ext_table_ids_cpu (Optional[mindspore.Tensor]): The CPU tensor for extended table IDs.
            **model_kwargs: Additional keyword arguments for the model.

        Returns:
            UserDict: A dictionary containing the next beam scores, next beam states, and next beam indices, or `None` when `cur_len` reaches `max_length`.

        Raises:
            AssertionError: If the length of next_instance_beam_states is not equal to zero when cur_len is equal to
                max_length, or not equal to self.num_beams otherwise.

        """
        next_beam_state = []
        for sent_id in range(batch_size):
            self._done[sent_id] = self._done[sent_id] or self._beam_hyps[sent_id].is_done(
                next_scores[sent_id].max().item(), cur_len
            )
            if self._done[sent_id]:
                next_beam_state.append(
                    [
                        (
                            {
                                "idx": 0,
                                "ans": [],
                                "nx_token_id": pad_token_id,
                                "nx_token_sub": 0,
                                "nx_segment_id": 0,
                                "nx_position": 0,
                            },
                            0,
                            0,
                        )
                    ]
                    * self.num_beams
                )
                continue

            next_instance_beam_states = []

            for idx, value in zip(next_tokens[sent_id], next_scores[sent_id]):
                beam_id = ops.div(idx, _next_scores.shape[-1], rounding_mode="floor").item()
                word_id = (idx % _next_scores.shape[-1]).item()

                curr_info = self.beam_states[sent_id][beam_id]
                if (
                    word_id == eos_token_id
                    and (curr_info["idx"] + 1 == len(model_kwargs["other_info"][sent_id]["predict_segments"]))
                ) or cur_len == max_length:
                    self._beam_hyps[sent_id].add(
                        self.beam_states[sent_id][beam_id]["ans"]
                        + [
                            (
                                word_id,
                                model_kwargs["other_info"][sent_id]["predict_segments"][curr_info["idx"]][1],
                            )
                        ],
                        value.item(),
                    )
                elif word_id == eos_token_id:
                    next_instance_beam_states.append(
                        (
                            {
                                "idx": curr_info["idx"] + 1,
                                "ans": curr_info["ans"]
                                + [
                                    (
                                        word_id,
                                        model_kwargs["other_info"][sent_id]["predict_segments"][curr_info["idx"]][1],
                                    )
                                ],
                                "nx_token_id": bos_token_id,
                                "nx_token_sub": 0,
                                "nx_segment_id": model_kwargs["other_info"][sent_id]["predict_segments"][
                                    curr_info["idx"] + 1
                                ][0],
                                "nx_position": 0,
                            },
                            value.item(),
                            sent_id * self.num_beams + beam_id,
                        )
                    )

                else:
                    raw_word_id = word_id
                    word_id_sub = 0
                    if word_id >= vocab_size:
                        word_id -= vocab_size
                        word_id_sub = int(ext_table_sub_cpu[word_id].item())
                        word_id = int(ext_table_ids_cpu[word_id].item())

                    next_instance_beam_states.append(
                        (
                            {
                                "idx": curr_info["idx"],
                                "ans": curr_info["ans"]
                                + [
                                    (
                                        raw_word_id,
                                        model_kwargs["other_info"][sent_id]["predict_segments"][curr_info["idx"]][1],
                                    )
                                ],
                                "nx_token_id": word_id,
                                "nx_token_sub": word_id_sub,
                                "nx_segment_id": curr_info["nx_segment_id"],
                                "nx_position": curr_info["nx_position"] + 1,
                            },
                            value.item(),
                            sent_id * self.num_beams + beam_id,
                        )
                    )

                if len(next_instance_beam_states) == self.num_beams:
                    break
            assert len(next_instance_beam_states) == (0 if cur_len == max_length else self.num_beams)
            next_beam_state.append(next_instance_beam_states)

        if cur_len == max_length:
            return None

        beam_reorder_idx = []
        beam_new_scores = []
        beam_states = []
        for sent_id in range(batch_size):
            instance_beam_states = []
            for beam_id in range(self.num_beams):
                state, value, beam_idx = next_beam_state[sent_id][beam_id]
                beam_reorder_idx.append(beam_idx)
                beam_new_scores.append(value)
                instance_beam_states.append(state)
            beam_states.append(instance_beam_states)
        self.beam_states = beam_states

        return UserDict(
            {
                "next_beam_scores": mindspore.tensor(beam_new_scores).view(-1),
                "next_beam_states": beam_states,
                "next_beam_indices": mindspore.tensor(beam_reorder_idx, dtype=mindspore.int32).view(-1),
            }
        )

    def finalize(self) -> Tuple[mindspore.Tensor]:
        """
        Finalizes the beam search scoring process and returns the best hypotheses.

        Args:
            self: The instance of the CpmBeeBeamSearchScorer class.

        Returns:
            A list containing the best hypothesis for each batch element.

        Raises:
            None.

        This method iterates over the beam hypotheses generated during the beam search process and selects the
        best hypothesis from each beam. The best hypothesis is determined based on the maximum score assigned to it.
        The selected best hypotheses are then returned as a list.

        Note:
            - The beam hypotheses are internally stored in the _beam_hyps attribute of the CpmBeeBeamSearchScorer instance.
            - The best hypothesis is determined by selecting the hypothesis with the maximum score from each beam.

        Example:
            ```python
            >>> scorer = CpmBeeBeamSearchScorer()
            >>> results = scorer.finalize()
            >>> # results contains the best hypothesis for each batch element.
            ```
        """
        results = []
        for _, hypotheses in enumerate(self._beam_hyps):
            best_hyp = max(hypotheses.beams, key=lambda x: x[0])[1]
            results.append(best_hyp)
        return results

    @staticmethod
    def apply_repetition_penalty(
        logits,
        batch_size,
        num_beams,
        prev_output_tokens,
        repetition_penalty,
        start_idx=None,
        end_idx=None,
        window_size=None,
    ):
        """
        Applies repetition penalty to the logits for beam search in the CpmBeeBeamSearchScorer class.

        Args:
            logits (Tensor): The logits representing the scores for each token in the vocabulary.
                Shape: (batch_size * num_beams, vocab_size).
            batch_size (int): The size of the batch.
            num_beams (int): The number of beams used in the beam search.
            prev_output_tokens (Tensor): The previously generated tokens. Shape: (batch_size * num_beams, sequence_length).
            repetition_penalty (float): The coefficient for the repetition penalty. Must be >= 1.
            start_idx (int, optional): The start index of the window for calculating repetition penalty. Defaults to None.
            end_idx (int, optional): The end index of the window for calculating repetition penalty. Defaults to None.
            window_size (int, optional): The size of the window for calculating repetition penalty. Defaults to None.

        Returns:
            None

        Raises:
            AssertionError: If repetition_penalty is less than 1.

        """
        # only conduct repetition penalty for the output
        assert repetition_penalty >= 1, "repetition penalty coefficient should >= 1"
        # repetition penalty (from CTRL paper https://arxiv.org/abs/1909.05858)
        for i in range(batch_size * num_beams):
            if start_idx is None or end_idx is None:
                output_tokens = prev_output_tokens[i].tolist()
            else:
                if end_idx >= start_idx:
                    if window_size:
                        output_tokens = prev_output_tokens[i][
                            max(start_idx, end_idx + 1 - window_size) : end_idx + 1
                        ].tolist()
                    else:
                        output_tokens = prev_output_tokens[i][start_idx : end_idx + 1].tolist()
                else:
                    output_tokens = []
            for previous_token in set(output_tokens):
                # if score < 0 then repetition penalty has to be
                # multiplied to reduce the previous token probability
                if logits[i, previous_token] < 0:
                    logits[i, previous_token] *= repetition_penalty
                else:
                    logits[i, previous_token] /= repetition_penalty

mindnlp.transformers.models.cpmbee.modeling_cpmbee.CpmBeeBeamSearchScorer.__init__(batch_size, num_beams, length_penalty=1.0, do_early_stopping=False, num_beam_hyps_to_keep=1, num_beam_groups=1, max_length=None, **model_kwargs)

Initializes the CpmBeeBeamSearchScorer object.

PARAMETER DESCRIPTION
batch_size

The batch size for beam search.

TYPE: int

num_beams

The number of beams for beam search.

TYPE: int

length_penalty

The length penalty for beam search. Defaults to 1.0.

TYPE: float DEFAULT: 1.0

do_early_stopping

Flag to indicate if early stopping should be performed. Defaults to False.

TYPE: bool or str DEFAULT: False

num_beam_hyps_to_keep

The number of beam hypotheses to keep. Defaults to 1.

TYPE: int DEFAULT: 1

num_beam_groups

The number of beam groups for beam search. Defaults to 1.

TYPE: int DEFAULT: 1

max_length

The maximum length for beam search. Defaults to None.

TYPE: int DEFAULT: None

**model_kwargs

Additional model-specific keyword arguments.

DEFAULT: {}

RETURNS DESCRIPTION

None.

RAISES DESCRIPTION
ValueError

If the provided batch size, num_beams, num_beam_groups, or max_length is not a positive integer.

TypeError

If the provided length_penalty is not a float or if do_early_stopping is not a bool or str.

RuntimeError

If an error occurs during initialization.

Source code in mindnlp/transformers/models/cpmbee/modeling_cpmbee.py
def __init__(
    self,
    batch_size: int,
    num_beams: int,
    length_penalty: Optional[float] = 1.0,
    do_early_stopping: Optional[Union[bool, str]] = False,
    num_beam_hyps_to_keep: Optional[int] = 1,
    num_beam_groups: Optional[int] = 1,
    max_length: Optional[int] = None,
    **model_kwargs,
):
    """
    Initializes the CpmBeeBeamSearchScorer object.

    Args:
        batch_size (int): The batch size for beam search.
        num_beams (int): The number of beams for beam search.
        length_penalty (float, optional): The length penalty for beam search. Defaults to 1.0.
        do_early_stopping (bool or str, optional): Flag to indicate if early stopping should be performed.
            Defaults to False.
        num_beam_hyps_to_keep (int, optional): The number of beam hypotheses to keep. Defaults to 1.
        num_beam_groups (int, optional): The number of beam groups for beam search. Defaults to 1.
        max_length (int, optional): The maximum length for beam search. Defaults to None.
        **model_kwargs: Additional model-specific keyword arguments.

    Returns:
        None.

    Raises:
        ValueError: If the provided batch size, num_beams, num_beam_groups, or max_length is not a positive integer.
        TypeError: If the provided length_penalty is not a float or if do_early_stopping is not a bool or str.
        RuntimeError: If an error occurs during initialization.
    """
    super().__init__(batch_size, num_beams, length_penalty, do_early_stopping, num_beam_hyps_to_keep, num_beam_groups, max_length)
    self.num_beams = num_beams
    self.length_penalty = length_penalty
    self.do_early_stopping = do_early_stopping
    self.num_beam_hyps_to_keep = num_beam_hyps_to_keep
    self.num_beam_groups = num_beam_groups
    self.group_size = self.num_beams // self.num_beam_groups

    self._is_init = False
    self._beam_hyps = [
        CpmBeeBeamHypotheses(
            num_beams=self.num_beams,
            length_penalty=self.length_penalty,
            early_stopping=self.do_early_stopping,
            max_length=max_length,
        )
        for _ in range(batch_size)
    ]
    self._done = mindspore.tensor([False for _ in range(batch_size)], dtype=mindspore.bool_)

    self.beam_states = []
    for sent_id in range(batch_size):
        instance_beam_states = []

        for _ in range(self.num_beams):
            instance_beam_states.append(
                {
                    "idx": 0,
                    "ans": [],
                    "nx_token_id": 6,
                    "nx_token_sub": 0,
                    "nx_segment_id": model_kwargs["other_info"][sent_id]["predict_segments"][0][0],
                    "nx_position": 0,
                }
            )
        self.beam_states.append(instance_beam_states)
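For reference, a small sketch (plain Python, with a hypothetical segment id standing in for `other_info[sent_id]["predict_segments"][0][0]`) of the per-beam state structure that this constructor builds:

```python
# Illustrative sketch of the per-beam state kept by the scorer; values here are hypothetical.
# Each batch element gets num_beams independent state dicts; the nx_* fields describe the
# token that will be fed to the model at the next decoding step.
batch_size, num_beams = 2, 3
hypothetical_first_segment_id = 1  # stands in for other_info[sent_id]["predict_segments"][0][0]

beam_states = [
    [
        {
            "idx": 0,          # index of the segment currently being predicted
            "ans": [],         # (token_id, segment) pairs generated so far
            "nx_token_id": 6,  # next token id to feed (6 is the value used by the scorer)
            "nx_token_sub": 0,
            "nx_segment_id": hypothetical_first_segment_id,
            "nx_position": 0,
        }
        for _ in range(num_beams)
    ]
    for _ in range(batch_size)
]
print(len(beam_states), len(beam_states[0]))  # 2 3
```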

mindnlp.transformers.models.cpmbee.modeling_cpmbee.CpmBeeBeamSearchScorer.apply_repetition_penalty(logits, batch_size, num_beams, prev_output_tokens, repetition_penalty, start_idx=None, end_idx=None, window_size=None) staticmethod

Applies repetition penalty to the logits for beam search in the CpmBeeBeamSearchScorer class.

PARAMETER DESCRIPTION
logits

The logits representing the scores for each token in the vocabulary. Shape: (batch_size * num_beams, vocab_size).

TYPE: Tensor

batch_size

The size of the batch.

TYPE: int

num_beams

The number of beams used in the beam search.

TYPE: int

prev_output_tokens

The previously generated tokens. Shape: (batch_size * num_beams, sequence_length).

TYPE: Tensor

repetition_penalty

The coefficient for the repetition penalty. Must be >= 1.

TYPE: float

start_idx

The start index of the window for calculating repetition penalty. Defaults to None.

TYPE: int DEFAULT: None

end_idx

The end index of the window for calculating repetition penalty. Defaults to None.

TYPE: int DEFAULT: None

window_size

The size of the window for calculating repetition penalty. Defaults to None.

TYPE: int DEFAULT: None

RETURNS DESCRIPTION

None

RAISES DESCRIPTION
AssertionError

If repetition_penalty is less than 1.

Source code in mindnlp/transformers/models/cpmbee/modeling_cpmbee.py
@staticmethod
def apply_repetition_penalty(
    logits,
    batch_size,
    num_beams,
    prev_output_tokens,
    repetition_penalty,
    start_idx=None,
    end_idx=None,
    window_size=None,
):
    """
    Applies repetition penalty to the logits for beam search in the CpmBeeBeamSearchScorer class.

    Args:
        logits (Tensor): The logits representing the scores for each token in the vocabulary.
            Shape: (batch_size * num_beams, vocab_size).
        batch_size (int): The size of the batch.
        num_beams (int): The number of beams used in the beam search.
        prev_output_tokens (Tensor): The previously generated tokens. Shape: (batch_size * num_beams, sequence_length).
        repetition_penalty (float): The coefficient for the repetition penalty. Must be >= 1.
        start_idx (int, optional): The start index of the window for calculating repetition penalty. Defaults to None.
        end_idx (int, optional): The end index of the window for calculating repetition penalty. Defaults to None.
        window_size (int, optional): The size of the window for calculating repetition penalty. Defaults to None.

    Returns:
        None

    Raises:
        AssertionError: If repetition_penalty is less than 1.

    """
    # only conduct repetition penalty for the output
    assert repetition_penalty >= 1, "repetition penalty coefficient should >= 1"
    # repetition penalty (from CTRL paper https://arxiv.org/abs/1909.05858)
    for i in range(batch_size * num_beams):
        if start_idx is None or end_idx is None:
            output_tokens = prev_output_tokens[i].tolist()
        else:
            if end_idx >= start_idx:
                if window_size:
                    output_tokens = prev_output_tokens[i][
                        max(start_idx, end_idx + 1 - window_size) : end_idx + 1
                    ].tolist()
                else:
                    output_tokens = prev_output_tokens[i][start_idx : end_idx + 1].tolist()
            else:
                output_tokens = []
        for previous_token in set(output_tokens):
            # if score < 0 then repetition penalty has to be
            # multiplied to reduce the previous token probability
            if logits[i, previous_token] < 0:
                logits[i, previous_token] *= repetition_penalty
            else:
                logits[i, previous_token] /= repetition_penalty
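A minimal NumPy sketch (illustration only) of the CTRL-style penalty applied above: logits of previously generated tokens are divided by the coefficient when positive and multiplied by it when negative, so the token becomes less likely either way.

```python
import numpy as np

def penalize(logits_row, previous_tokens, repetition_penalty=1.2):
    """Penalize a single row of logits for tokens that already appeared (illustration only)."""
    out = logits_row.copy()
    for tok in set(previous_tokens):
        if out[tok] < 0:
            out[tok] *= repetition_penalty   # more negative -> even less likely
        else:
            out[tok] /= repetition_penalty   # less positive -> less likely
    return out

logits = np.array([2.0, -1.0, 0.5, 3.0])
print(penalize(logits, [0, 1]))  # [~1.667, -1.2, 0.5, 3.0]
```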

mindnlp.transformers.models.cpmbee.modeling_cpmbee.CpmBeeBeamSearchScorer.finalize()

Finalizes the beam search scoring process and returns the best hypotheses.

PARAMETER DESCRIPTION
self

The instance of the CpmBeeBeamSearchScorer class.

RETURNS DESCRIPTION
Tuple[Tensor]

A list containing the best hypothesis for each batch element.

This method iterates over the beam hypotheses generated during the beam search process and selects the best hypothesis from each beam. The best hypothesis is determined based on the maximum score assigned to it. The selected best hypotheses are then returned as a list.

Note
  • The beam hypotheses are internally stored in the _beam_hyps attribute of the CpmBeeBeamSearchScorer instance.
  • The best hypothesis is determined by selecting the hypothesis with the maximum score from each beam.
Example
>>> scorer = CpmBeeBeamSearchScorer()
>>> results = scorer.finalize()
>>> # results contains the best hypothesis for each batch element.
Source code in mindnlp/transformers/models/cpmbee/modeling_cpmbee.py
def finalize(self) -> Tuple[mindspore.Tensor]:
    """
    Finalizes the beam search scoring process and returns the best hypotheses.

    Args:
        self: The instance of the CpmBeeBeamSearchScorer class.

    Returns:
        A list containing the best hypothesis for each batch element.

    Raises:
        None.

    This method iterates over the beam hypotheses generated during the beam search process and selects the
    best hypothesis from each beam. The best hypothesis is determined based on the maximum score assigned to it.
    The selected best hypotheses are then returned as a list.

    Note:
        - The beam hypotheses are internally stored in the _beam_hyps attribute of the CpmBeeBeamSearchScorer instance.
        - The best hypothesis is determined by selecting the hypothesis with the maximum score from each beam.

    Example:
        ```python
        >>> scorer = CpmBeeBeamSearchScorer()
        >>> results = scorer.finalize()
        >>> # results contains the best hypothesis for each batch element.
        ```
    """
    results = []
    for _, hypotheses in enumerate(self._beam_hyps):
        best_hyp = max(hypotheses.beams, key=lambda x: x[0])[1]
        results.append(best_hyp)
    return results

mindnlp.transformers.models.cpmbee.modeling_cpmbee.CpmBeeBeamSearchScorer.process(batch_size, cur_len, _next_scores, next_scores, next_tokens, vocab_size=None, pad_token_id=None, bos_token_id=None, eos_token_id=None, max_length=None, ext_table_sub_cpu=None, ext_table_ids_cpu=None, **model_kwargs)

Process the beam search for the CpmBeeBeamSearchScorer.

PARAMETER DESCRIPTION
self

The instance of the CpmBeeBeamSearchScorer class.

batch_size

The batch size for processing.

TYPE: int

cur_len

The current length of the sequence being processed.

TYPE: int

_next_scores

The scores for the next tokens.

TYPE: Tensor

next_scores

The scores for the next tokens.

TYPE: Tensor

next_tokens

The tokens for the next sequence.

TYPE: Tensor

vocab_size

The size of the vocabulary. Defaults to None.

TYPE: Optional[int] DEFAULT: None

pad_token_id

The token ID for padding. Defaults to None.

TYPE: Optional[int] DEFAULT: None

bos_token_id

The token ID for the beginning of sequence. Defaults to None.

TYPE: Optional[int] DEFAULT: None

eos_token_id

The token ID for the end of sequence. Defaults to None.

TYPE: Optional[Union[int, List[int]]] DEFAULT: None

max_length

The maximum length of the sequence. Defaults to None.

TYPE: Optional[int] DEFAULT: None

ext_table_sub_cpu

The CPU tensor for extended table sub.

TYPE: Optional[Tensor] DEFAULT: None

ext_table_ids_cpu

The CPU tensor for extended table IDs.

TYPE: Optional[Tensor] DEFAULT: None

**model_kwargs

Additional keyword arguments for the model.

DEFAULT: {}

RETURNS DESCRIPTION
Tuple[Tensor]

UserDict: A dictionary containing the next beam scores, next beam states, and next beam indices, or None when cur_len reaches max_length.

RAISES DESCRIPTION
AssertionError

If the length of next_instance_beam_states is not equal to zero when cur_len is equal to max_length, or not equal to self.num_beams otherwise.

Source code in mindnlp/transformers/models/cpmbee/modeling_cpmbee.py
def process(
    self,
    batch_size: int,
    cur_len: int,
    _next_scores: mindspore.Tensor,
    next_scores: mindspore.Tensor,
    next_tokens: mindspore.Tensor,
    vocab_size: Optional[int] = None,
    pad_token_id: Optional[int] = None,
    bos_token_id: Optional[int] = None,
    eos_token_id: Optional[Union[int, List[int]]] = None,
    max_length: Optional[int] = None,
    ext_table_sub_cpu: Optional[mindspore.Tensor] = None,
    ext_table_ids_cpu: Optional[mindspore.Tensor] = None,
    **model_kwargs,
) -> Tuple[mindspore.Tensor]:
    """
    Process the beam search for the CpmBeeBeamSearchScorer.

    Args:
        self: The instance of the CpmBeeBeamSearchScorer class.
        batch_size (int): The batch size for processing.
        cur_len (int): The current length of the sequence being processed.
        _next_scores (mindspore.Tensor): The scores for the next tokens.
        next_scores (mindspore.Tensor): The scores for the next tokens.
        next_tokens (mindspore.Tensor): The tokens for the next sequence.
        vocab_size (Optional[int]): The size of the vocabulary. Defaults to None.
        pad_token_id (Optional[int]): The token ID for padding. Defaults to None.
        bos_token_id (Optional[int]): The token ID for the beginning of sequence. Defaults to None.
        eos_token_id (Optional[Union[int, List[int]]]): The token ID for the end of sequence. Defaults to None.
        max_length (Optional[int]): The maximum length of the sequence. Defaults to None.
        ext_table_sub_cpu (Optional[mindspore.Tensor]): The CPU tensor for extended table sub.
        ext_table_ids_cpu (Optional[mindspore.Tensor]): The CPU tensor for extended table IDs.
        **model_kwargs: Additional keyword arguments for the model.

    Returns:
        UserDict: A dictionary containing the next beam scores, next beam states, and next beam indices, or `None` when `cur_len` reaches `max_length`.

    Raises:
        AssertionError: If the length of next_instance_beam_states is not equal to zero when cur_len is equal to
            max_length, or not equal to self.num_beams otherwise.

    """
    next_beam_state = []
    for sent_id in range(batch_size):
        self._done[sent_id] = self._done[sent_id] or self._beam_hyps[sent_id].is_done(
            next_scores[sent_id].max().item(), cur_len
        )
        if self._done[sent_id]:
            next_beam_state.append(
                [
                    (
                        {
                            "idx": 0,
                            "ans": [],
                            "nx_token_id": pad_token_id,
                            "nx_token_sub": 0,
                            "nx_segment_id": 0,
                            "nx_position": 0,
                        },
                        0,
                        0,
                    )
                ]
                * self.num_beams
            )
            continue

        next_instance_beam_states = []

        for idx, value in zip(next_tokens[sent_id], next_scores[sent_id]):
            beam_id = ops.div(idx, _next_scores.shape[-1], rounding_mode="floor").item()
            word_id = (idx % _next_scores.shape[-1]).item()

            curr_info = self.beam_states[sent_id][beam_id]
            if (
                word_id == eos_token_id
                and (curr_info["idx"] + 1 == len(model_kwargs["other_info"][sent_id]["predict_segments"]))
            ) or cur_len == max_length:
                self._beam_hyps[sent_id].add(
                    self.beam_states[sent_id][beam_id]["ans"]
                    + [
                        (
                            word_id,
                            model_kwargs["other_info"][sent_id]["predict_segments"][curr_info["idx"]][1],
                        )
                    ],
                    value.item(),
                )
            elif word_id == eos_token_id:
                next_instance_beam_states.append(
                    (
                        {
                            "idx": curr_info["idx"] + 1,
                            "ans": curr_info["ans"]
                            + [
                                (
                                    word_id,
                                    model_kwargs["other_info"][sent_id]["predict_segments"][curr_info["idx"]][1],
                                )
                            ],
                            "nx_token_id": bos_token_id,
                            "nx_token_sub": 0,
                            "nx_segment_id": model_kwargs["other_info"][sent_id]["predict_segments"][
                                curr_info["idx"] + 1
                            ][0],
                            "nx_position": 0,
                        },
                        value.item(),
                        sent_id * self.num_beams + beam_id,
                    )
                )

            else:
                raw_word_id = word_id
                word_id_sub = 0
                if word_id >= vocab_size:
                    word_id -= vocab_size
                    word_id_sub = int(ext_table_sub_cpu[word_id].item())
                    word_id = int(ext_table_ids_cpu[word_id].item())

                next_instance_beam_states.append(
                    (
                        {
                            "idx": curr_info["idx"],
                            "ans": curr_info["ans"]
                            + [
                                (
                                    raw_word_id,
                                    model_kwargs["other_info"][sent_id]["predict_segments"][curr_info["idx"]][1],
                                )
                            ],
                            "nx_token_id": word_id,
                            "nx_token_sub": word_id_sub,
                            "nx_segment_id": curr_info["nx_segment_id"],
                            "nx_position": curr_info["nx_position"] + 1,
                        },
                        value.item(),
                        sent_id * self.num_beams + beam_id,
                    )
                )

            if len(next_instance_beam_states) == self.num_beams:
                break
        assert len(next_instance_beam_states) == (0 if cur_len == max_length else self.num_beams)
        next_beam_state.append(next_instance_beam_states)

    if cur_len == max_length:
        return None

    beam_reorder_idx = []
    beam_new_scores = []
    beam_states = []
    for sent_id in range(batch_size):
        instance_beam_states = []
        for beam_id in range(self.num_beams):
            state, value, beam_idx = next_beam_state[sent_id][beam_id]
            beam_reorder_idx.append(beam_idx)
            beam_new_scores.append(value)
            instance_beam_states.append(state)
        beam_states.append(instance_beam_states)
    self.beam_states = beam_states

    return UserDict(
        {
            "next_beam_scores": mindspore.tensor(beam_new_scores).view(-1),
            "next_beam_states": beam_states,
            "next_beam_indices": mindspore.tensor(beam_reorder_idx, dtype=mindspore.int32).view(-1),
        }
    )
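One detail worth isolating is the extended-vocabulary remapping in the `else` branch above: a sampled id at or beyond `vocab_size` indexes the `ext_table_*` tensors, which map it back to a real token id and sub-token index. A plain-Python sketch with hypothetical table contents:

```python
# Illustration only; the table contents below are hypothetical.
vocab_size = 10
ext_table_ids = [42, 43]   # stands in for ext_table_ids_cpu
ext_table_sub = [0, 1]     # stands in for ext_table_sub_cpu

def remap(word_id):
    """Map a sampled id to a (token id, sub-token index) pair, as done in `process`."""
    word_id_sub = 0
    if word_id >= vocab_size:
        word_id -= vocab_size              # index into the extension tables
        word_id_sub = ext_table_sub[word_id]
        word_id = ext_table_ids[word_id]
    return word_id, word_id_sub

print(remap(3))   # (3, 0): ordinary vocabulary token
print(remap(11))  # (43, 1): second entry of the extension tables
```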

mindnlp.transformers.models.cpmbee.modeling_cpmbee.CpmBeeBucketPositionBias

Bases: Module

This class represents a position bias computation module in the CpmBee model. It is used to calculate the relative position buckets for the attention mechanism.

ATTRIBUTE DESCRIPTION
num_heads

The number of attention heads.

TYPE: int

num_buckets

The number of position bias buckets.

TYPE: int

num_segment_bucket

The number of segment buckets used for position bias.

TYPE: int

max_distance

The maximum distance for position bias calculation.

TYPE: int

relative_attention_bias

The learnable parameter used for relative attention bias calculation.

TYPE: Parameter

METHOD DESCRIPTION
__init__

Initializes the CpmBeeBucketPositionBias instance.

forward

Constructs the position bias based on the given query and key positions and relative buckets.

_position_bucket

Computes the position bucket for the given relative position.

Source code in mindnlp/transformers/models/cpmbee/modeling_cpmbee.py
class CpmBeeBucketPositionBias(nn.Module):

    """
    This class represents a position bias computation module in the CpmBee model.
    It is used to calculate the relative position buckets for the attention mechanism.

    Attributes:
        num_heads (int): The number of attention heads.
        num_buckets (int): The number of position bias buckets.
        num_segment_bucket (int): The number of segment buckets used for position bias.
        max_distance (int): The maximum distance for position bias calculation.
        relative_attention_bias (mindspore.Parameter): The learnable parameter used for relative attention bias calculation.

    Methods:
        __init__:
            Initializes the CpmBeeBucketPositionBias instance.

        forward:
            Constructs the position bias based on the given query and key positions and relative buckets.

        _position_bucket:
            Computes the position bucket for the given relative position.

    """
    def __init__(self, config: CpmBeeConfig) -> None:
        """Initializes an instance of the CpmBeeBucketPositionBias class.

        Args:
            self: The instance of the class.
            config (CpmBeeConfig):
                The configuration object containing various parameters.

                - num_attention_heads (int): The number of attention heads.
                - position_bias_num_buckets (int): The number of buckets for position bias.
                - position_bias_num_segment_buckets (int): The number of buckets for segment bias.
                - position_bias_max_distance (int): The maximum distance for position bias.
                - ms_dtype: The dtype for the position bias parameter.

        Returns:
            None.

        Raises:
            None.
        """
        super().__init__()

        self.num_heads = config.num_attention_heads
        self.num_buckets = config.position_bias_num_buckets
        self.num_segment_bucket = config.position_bias_num_segment_buckets
        self.max_distance = config.position_bias_max_distance

        self.relative_attention_bias = Parameter(
            ops.zeros(
                config.position_bias_num_buckets + config.position_bias_num_segment_buckets,
                config.num_attention_heads,
                dtype=config.ms_dtype,
            ),
        )

    def forward(self, query_pos: mindspore.Tensor, key_pos: mindspore.Tensor, rel_buckets: mindspore.Tensor):
        """
        This method forwards relative position bias embeddings based on the input query positions, key positions,
        and relative buckets.

        Args:
            self (CpmBeeBucketPositionBias): An instance of the CpmBeeBucketPositionBias class.
            query_pos (mindspore.Tensor): A tensor representing the positions of queries in the input sequence.
            key_pos (mindspore.Tensor): A tensor representing the positions of keys in the input sequence.
            rel_buckets (mindspore.Tensor): A tensor containing relative position buckets.

        Returns:
            mindspore.Tensor: The relative position bias of shape `(batch, num_heads, querylen, keylen)`, ready to be added to the attention scores.

        Raises:
            AssertionError:
                - If the number of batches in key_pos and query_pos tensors are not equal.
                - If the number of batches in rel_buckets and key_pos tensors are not equal.
                - If the number of query positions in the rel_buckets tensor does not match the query positions tensor.
                - If the number of key positions in the rel_buckets tensor does not match the key positions tensor.
        """
        batch = key_pos.shape[0]
        keylen = key_pos.shape[1]
        querylen = query_pos.shape[1]

        if key_pos.shape[0] != query_pos.shape[0]:
            raise AssertionError(
                f"key_pos.shape[0] should be equal to query_pos.shape[0], but got {key_pos.shape[0]} and {query_pos.shape[0]}!"
            )
        if rel_buckets.shape[0] != batch:
            raise AssertionError(
                f"rel_buckets.shape[0] should be equal to batch, but got {rel_buckets.shape[0]} and {batch}!"
            )
        if rel_buckets.shape[1] != querylen:
            raise AssertionError(
                f"rel_buckets.shape[1] should be equal to querylen, but got {rel_buckets.shape[1]} and {querylen}!"
            )
        if rel_buckets.shape[2] != keylen:
            raise AssertionError(
                f"rel_buckets.shape[2] should be equal to keylen, but got {rel_buckets.shape[2]} and {keylen}!"
            )

        relative_position_bucket = rel_buckets - 1 + self.num_buckets

        inner_segment_bucket = self._position_bucket(
            key_pos[..., None, :] - query_pos[..., :, None],
            num_buckets=self.num_buckets,
            max_distance=self.max_distance,
        )
        relative_position_bucket = ops.where(
            rel_buckets == 0,
            inner_segment_bucket,
            relative_position_bucket,
        )

        embeds = embedding(relative_position_bucket, self.relative_attention_bias)
        embeds = embeds.permute(0, 3, 1, 2)
        return embeds

    def _position_bucket(self, relative_position, num_buckets=32, max_distance=128):
        """
        This method calculates the position bucket for a given relative position within a specified range.

        Args:
            self: The instance of the CpmBeeBucketPositionBias class.
            relative_position (mindspore.Tensor): A tensor of relative positions (key position minus query position)
                for which buckets need to be calculated.
            num_buckets (int, optional): The number of buckets to categorize the relative positions into. Defaults to 32.
            max_distance (int, optional): The maximum distance beyond which all relative positions share the last
                log-spaced buckets. Defaults to 128.

        Returns:
            mindspore.Tensor: A tensor of bucket indices with the same shape as 'relative_position'.

        Raises:
            None.
        """
        relative_buckets = 0
        num_buckets //= 2
        relative_buckets = (relative_position > 0).to(mindspore.int32) * num_buckets
        relative_position = ops.abs(relative_position)
        max_exact = num_buckets // 2
        is_small = relative_position < max_exact
        relative_postion_if_large = max_exact + (
            ops.log(relative_position.float() / max_exact)
            / math.log(max_distance / max_exact)
            * (num_buckets - max_exact)
        ).to(mindspore.int32)
        relative_postion_if_large = ops.minimum(
            relative_postion_if_large,
            ops.full_like(relative_postion_if_large, num_buckets - 1),
        )
        relative_buckets += ops.where(is_small, relative_position.to(mindspore.int32), relative_postion_if_large)
        return relative_buckets
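
The bucketing above is easier to follow in scalar form. The following is a plain-Python sketch of the same logic, for illustration only (the helper name position_bucket is not part of the module): the sign of the relative position selects one half of the buckets, short distances each get their own bucket, and longer distances share logarithmically spaced buckets capped at num_buckets - 1.

Example
>>> import math
...
>>> def position_bucket(relative_position: int, num_buckets: int = 32, max_distance: int = 128) -> int:
...     # Scalar re-implementation of the tensor logic above, for illustration only.
...     num_buckets //= 2
...     bucket = num_buckets if relative_position > 0 else 0       # the sign selects a bucket half
...     rel = abs(relative_position)
...     max_exact = num_buckets // 2
...     if rel < max_exact:
...         return bucket + rel                                    # short distances: one bucket each
...     if_large = max_exact + int(
...         math.log(rel / max_exact) / math.log(max_distance / max_exact) * (num_buckets - max_exact)
...     )
...     return bucket + min(if_large, num_buckets - 1)             # long distances: log-spaced, capped
...
>>> [position_bucket(d) for d in (-3, 0, 3, 40, 200)]
[3, 0, 19, 28, 31]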

mindnlp.transformers.models.cpmbee.modeling_cpmbee.CpmBeeBucketPositionBias.__init__(config)

Initializes an instance of the CpmBeeBucketPositionBias class.

PARAMETER DESCRIPTION
self

The instance of the class.

config

The configuration object containing various parameters.

  • num_attention_heads (int): The number of attention heads.
  • position_bias_num_buckets (int): The number of buckets for position bias.
  • position_bias_num_segment_buckets (int): The number of buckets for segment bias.
  • position_bias_max_distance (int): The maximum distance for position bias.
  • ms_dtype: The dtype for the position bias parameter.

TYPE: CpmBeeConfig

RETURNS DESCRIPTION
None

None.

Source code in mindnlp/transformers/models/cpmbee/modeling_cpmbee.py
def __init__(self, config: CpmBeeConfig) -> None:
    """Initializes an instance of the CpmBeeBucketPositionBias class.

    Args:
        self: The instance of the class.
        config (CpmBeeConfig):
            The configuration object containing various parameters.

            - num_attention_heads (int): The number of attention heads.
            - position_bias_num_buckets (int): The number of buckets for position bias.
            - position_bias_num_segment_buckets (int): The number of buckets for segment bias.
            - position_bias_max_distance (int): The maximum distance for position bias.
            - ms_dtype: The dtype for the position bias parameter.

    Returns:
        None.

    Raises:
        None.
    """
    super().__init__()

    self.num_heads = config.num_attention_heads
    self.num_buckets = config.position_bias_num_buckets
    self.num_segment_bucket = config.position_bias_num_segment_buckets
    self.max_distance = config.position_bias_max_distance

    self.relative_attention_bias = Parameter(
        ops.zeros(
            config.position_bias_num_buckets + config.position_bias_num_segment_buckets,
            config.num_attention_heads,
            dtype=config.ms_dtype,
        ),
    )

mindnlp.transformers.models.cpmbee.modeling_cpmbee.CpmBeeBucketPositionBias.forward(query_pos, key_pos, rel_buckets)

This method forwards relative position bias embeddings based on the input query positions, key positions, and relative buckets.

PARAMETER DESCRIPTION
self

An instance of the CpmBeeBucketPositionBias class.

TYPE: CpmBeeBucketPositionBias

query_pos

A tensor representing the positions of queries in the input sequence.

TYPE: Tensor

key_pos

A tensor representing the positions of keys in the input sequence.

TYPE: Tensor

rel_buckets

A tensor containing relative position buckets.

TYPE: Tensor

RETURNS DESCRIPTION

mindspore.Tensor: The relative position bias embeddings of shape (batch, num_heads, query_len, key_len).

RAISES DESCRIPTION
AssertionError
  • If the batch sizes of the key_pos and query_pos tensors are not equal.
  • If the batch sizes of the rel_buckets and key_pos tensors are not equal.
  • If the query length of the rel_buckets tensor does not match the query_pos tensor.
  • If the key length of the rel_buckets tensor does not match the key_pos tensor.
Source code in mindnlp/transformers/models/cpmbee/modeling_cpmbee.py
def forward(self, query_pos: mindspore.Tensor, key_pos: mindspore.Tensor, rel_buckets: mindspore.Tensor):
    """
    This method forwards relative position bias embeddings based on the input query positions, key positions,
    and relative buckets.

    Args:
        self (CpmBeeBucketPositionBias): An instance of the CpmBeeBucketPositionBias class.
        query_pos (mindspore.Tensor): A tensor representing the positions of queries in the input sequence.
        key_pos (mindspore.Tensor): A tensor representing the positions of keys in the input sequence.
        rel_buckets (mindspore.Tensor): A tensor containing relative position buckets.

    Returns:
        mindspore.Tensor: The relative position bias embeddings of shape
            (batch, num_heads, query_len, key_len).

    Raises:
        AssertionError:
            - If the batch sizes of the key_pos and query_pos tensors are not equal.
            - If the batch sizes of the rel_buckets and key_pos tensors are not equal.
            - If the query length of the rel_buckets tensor does not match the query_pos tensor.
            - If the key length of the rel_buckets tensor does not match the key_pos tensor.
    """
    batch = key_pos.shape[0]
    keylen = key_pos.shape[1]
    querylen = query_pos.shape[1]

    if key_pos.shape[0] != query_pos.shape[0]:
        raise AssertionError(
            f"key_pos.shape[0] should be equal to query_pos.shape[0], but got {key_pos.shape[0]} and {query_pos.shape[0]}!"
        )
    if rel_buckets.shape[0] != batch:
        raise AssertionError(
            f"rel_buckets.shape[0] should be equal to batch, but got {rel_buckets.shape[0]} and {batch}!"
        )
    if rel_buckets.shape[1] != querylen:
        raise AssertionError(
            f"rel_buckets.shape[1] should be equal to querylen, but got {rel_buckets.shape[1]} and {querylen}!"
        )
    if rel_buckets.shape[2] != keylen:
        raise AssertionError(
            f"rel_buckets.shape[2] should be equal to keylen, but got {rel_buckets.shape[2]} and {keylen}!"
        )

    relative_position_bucket = rel_buckets - 1 + self.num_buckets

    inner_segment_bucket = self._position_bucket(
        key_pos[..., None, :] - query_pos[..., :, None],
        num_buckets=self.num_buckets,
        max_distance=self.max_distance,
    )
    relative_position_bucket = ops.where(
        rel_buckets == 0,
        inner_segment_bucket,
        relative_position_bucket,
    )

    embeds = embedding(relative_position_bucket, self.relative_attention_bias)
    embeds = embeds.permute(0, 3, 1, 2)
    return embeds
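
A minimal usage sketch of the forward pass follows. It assumes the default CpmBeeConfig() can be instantiated directly (as in the configuration example) and that calling the module dispatches to forward; the toy positions and the all-zero rel_buckets (every query/key pair treated as in-segment) are made up for illustration.

Example
>>> import mindspore
>>> from mindspore import ops
>>> from mindnlp.transformers.models.cpmbee.configuration_cpmbee import CpmBeeConfig
>>> from mindnlp.transformers.models.cpmbee.modeling_cpmbee import CpmBeeBucketPositionBias
...
>>> config = CpmBeeConfig()
>>> position_bias = CpmBeeBucketPositionBias(config)
...
>>> batch, query_len, key_len = 1, 4, 4
>>> query_pos = ops.arange(query_len).reshape(batch, query_len)
>>> key_pos = ops.arange(key_len).reshape(batch, key_len)
>>> rel_buckets = ops.zeros((batch, query_len, key_len), dtype=mindspore.int32)  # 0 means "same segment"
...
>>> embeds = position_bias(query_pos, key_pos, rel_buckets)
>>> # embeds has shape (batch, num_attention_heads, query_len, key_len)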

mindnlp.transformers.models.cpmbee.modeling_cpmbee.CpmBeeDenseGatedACT

Bases: Module

This class represents a dense gated activation module in the CpmBee framework. It performs a nonlinear transformation on an input tensor from one feature space to another using a gated activation function.

The class inherits from the nn.Module class.

ATTRIBUTE DESCRIPTION
w_0

An instance of the CpmBeeLinear class representing the first linear transformation.

TYPE: CpmBeeLinear

w_1

An instance of the CpmBeeLinear class representing the second linear transformation.

TYPE: CpmBeeLinear

act

An instance of the GELU activation function.

TYPE: GELU

METHOD DESCRIPTION
__init__

Initializes the CpmBeeDenseGatedACT class.

forward

Transforms an input tensor from one feature space to another via a nonlinear operation.

Source code in mindnlp/transformers/models/cpmbee/modeling_cpmbee.py
class CpmBeeDenseGatedACT(nn.Module):

    """
    This class represents a dense gated activation module in the CpmBee framework.
    It performs a nonlinear transformation on an input tensor from one feature space to another using
    a gated activation function.

    The class inherits from the `nn.Module` class.

    Attributes:
        w_0 (CpmBeeLinear): An instance of the CpmBeeLinear class representing the first linear transformation.
        w_1 (CpmBeeLinear): An instance of the CpmBeeLinear class representing the second linear transformation.
        act (nn.GELU): An instance of the GELU activation function.

    Methods:
        __init__: Initializes the CpmBeeDenseGatedACT class.
        forward: Transforms an input tensor from one feature space to another via a nonlinear operation.

    """
    def __init__(self, config: CpmBeeConfig):
        """
        Initializes a new instance of the CpmBeeDenseGatedACT class.

        Args:
            self: The current CpmBeeDenseGatedACT object.
            config (CpmBeeConfig): An instance of the CpmBeeConfig class containing configuration parameters.

        Returns:
            None

        Raises:
            None
        """
        super().__init__()
        self.w_0 = CpmBeeLinear(config.hidden_size, config.dim_ff, dtype=config.ms_dtype)
        self.w_1 = CpmBeeLinear(config.hidden_size, config.dim_ff, dtype=config.ms_dtype)
        self.act = nn.GELU()

    def forward(self, hidden_states: mindspore.Tensor):
        """Transform an input tensor from one feature space to another via a nonlinear operation

        Args:
            hidden_states (`mindspore.Tensor` of shape `(batch, seq_len, dim_in)`)
        """
        gate_score = self.act(self.w_0(hidden_states))
        hidden_states = self.w_1(hidden_states)

        hidden_states = gate_score * hidden_states
        return hidden_states

mindnlp.transformers.models.cpmbee.modeling_cpmbee.CpmBeeDenseGatedACT.__init__(config)

Initializes a new instance of the CpmBeeDenseGatedACT class.

PARAMETER DESCRIPTION
self

The current CpmBeeDenseGatedACT object.

config

An instance of the CpmBeeConfig class containing configuration parameters.

TYPE: CpmBeeConfig

RETURNS DESCRIPTION

None

Source code in mindnlp/transformers/models/cpmbee/modeling_cpmbee.py
def __init__(self, config: CpmBeeConfig):
    """
    Initializes a new instance of the CpmBeeDenseGatedACT class.

    Args:
        self: The current CpmBeeDenseGatedACT object.
        config (CpmBeeConfig): An instance of the CpmBeeConfig class containing configuration parameters.

    Returns:
        None

    Raises:
        None
    """
    super().__init__()
    self.w_0 = CpmBeeLinear(config.hidden_size, config.dim_ff, dtype=config.ms_dtype)
    self.w_1 = CpmBeeLinear(config.hidden_size, config.dim_ff, dtype=config.ms_dtype)
    self.act = nn.GELU()

mindnlp.transformers.models.cpmbee.modeling_cpmbee.CpmBeeDenseGatedACT.forward(hidden_states)

Transform an input tensor from one feature space to another via a nonlinear operation

Source code in mindnlp/transformers/models/cpmbee/modeling_cpmbee.py
def forward(self, hidden_states: mindspore.Tensor):
    """Transform an input tensor from one feature space to another via a nonlinear operation

    Args:
        hidden_states (`mindspore.Tensor` of shape `(batch, seq_len, dim_in)`)
    """
    gate_score = self.act(self.w_0(hidden_states))
    hidden_states = self.w_1(hidden_states)

    hidden_states = gate_score * hidden_states
    return hidden_states
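
The gating is an elementwise product between a GELU-activated projection and a second linear projection. The sketch below reproduces that computation with mindspore.nn.Dense and mindspore.nn.GELU as stand-ins for CpmBeeLinear and the module's activation; the sizes are made up.

Example
>>> import mindspore
>>> from mindspore import nn, ops
...
>>> hidden = ops.ones((2, 5, 8), dtype=mindspore.float32)    # (batch, seq_len, dim_in)
>>> w_0 = nn.Dense(8, 16, has_bias=False)                    # stand-in for CpmBeeLinear(dim_in, dim_ff)
>>> w_1 = nn.Dense(8, 16, has_bias=False)
>>> gate_score = nn.GELU()(w_0(hidden))                      # gating values
>>> out = gate_score * w_1(hidden)                           # (batch, seq_len, dim_ff)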

mindnlp.transformers.models.cpmbee.modeling_cpmbee.CpmBeeEmbeddingExt

Bases: Embedding

Contains a RotaryEmbedding.

Source code in mindnlp/transformers/models/cpmbee/modeling_cpmbee.py
class CpmBeeEmbeddingExt(nn.Embedding):
    """
    Contains a RotaryEmbedding.
    """
    def __init__(self, config: CpmBeeConfig):
        """
        Initialize the CpmBeeEmbeddingExt object.

        Args:
            self: The instance of the CpmBeeEmbeddingExt class.
            config (CpmBeeConfig):
                An instance of CpmBeeConfig containing configuration parameters for the embedding.

                - vocab_size (int): The size of the vocabulary.
                - hidden_size (int): The size of the hidden layer.
                - ms_dtype: The data type for model parameters.

        Returns:
            None.

        Raises:
            None.
        """
        super().__init__(config.vocab_size, config.hidden_size, dtype=config.ms_dtype)
        self.dim_model = config.hidden_size
        self.rotary_emb = CpmBeeRotaryEmbedding(config)

    def forward(self, ids: mindspore.Tensor, ids_sub: mindspore.Tensor):
        """
        Construct and return the embeddings of the given input IDs and sub-IDs for the CpmBeeEmbeddingExt class.

        Args:
            self (CpmBeeEmbeddingExt): An instance of the CpmBeeEmbeddingExt class.
            ids (mindspore.Tensor):
                The input IDs tensor:

                - Shape: (batch_size, sequence_length).
                - Type: int32 or int64.
                - Purpose: Represent the input IDs for which embeddings need to be forwarded.
            ids_sub (mindspore.Tensor):
                The sub-IDs tensor.

                - Shape: (batch_size, sequence_length).
                - Type: int32 or int64.
                - Purpose: Represent the sub-IDs for modifying the embeddings.

        Returns:
            mindspore.Tensor: The scaled token embeddings with rotary position information applied.

        Raises:
            None.
        """
        embeds = super().forward(ids) / math.sqrt(self.dim_model)
        return self.rotary_emb(embeds, ids_sub)

    def projection(self, x: mindspore.Tensor, ext_table: Optional[mindspore.Tensor] = None):
        """
        This method projects the input tensor 'x' using a dense layer and optionally concatenates it with another tensor 'ext_table'.

        Args:
            self: Instance of the class CpmBeeEmbeddingExt.
            x (mindspore.Tensor): Input tensor to be projected. It should have a shape compatible with the weight tensor.
            ext_table (Optional[mindspore.Tensor], optional): Additional tensor to be concatenated with the projected tensor 'x'.
                It should have a compatible shape with 'x'. Defaults to None.

        Returns:
            mindspore.Tensor: The logits obtained by projecting 'x' onto the embedding weight.
                If 'ext_table' is provided and non-empty, logits for the extra tokens are concatenated along the last axis.

        Raises:
            None
        """
        logits = ops.dense(x / math.sqrt(self.dim_model), self.weight)
        if ext_table is not None and 0 not in ext_table.shape:
            logits_ext = ops.dense(x, ext_table)
            logits = ops.cat([logits, logits_ext], axis=-1)
        return logits

mindnlp.transformers.models.cpmbee.modeling_cpmbee.CpmBeeEmbeddingExt.__init__(config)

Initialize the CpmBeeEmbeddingExt object.

PARAMETER DESCRIPTION
self

The instance of the CpmBeeEmbeddingExt class.

config

An instance of CpmBeeConfig containing configuration parameters for the embedding.

  • vocab_size (int): The size of the vocabulary.
  • hidden_size (int): The size of the hidden layer.
  • ms_dtype: The data type for model parameters.

TYPE: CpmBeeConfig

RETURNS DESCRIPTION

None.

Source code in mindnlp/transformers/models/cpmbee/modeling_cpmbee.py
def __init__(self, config: CpmBeeConfig):
    """
    Initialize the CpmBeeEmbeddingExt object.

    Args:
        self: The instance of the CpmBeeEmbeddingExt class.
        config (CpmBeeConfig):
            An instance of CpmBeeConfig containing configuration parameters for the embedding.

            - vocab_size (int): The size of the vocabulary.
            - hidden_size (int): The size of the hidden layer.
            - ms_dtype: The data type for model parameters.

    Returns:
        None.

    Raises:
        None.
    """
    super().__init__(config.vocab_size, config.hidden_size, dtype=config.ms_dtype)
    self.dim_model = config.hidden_size
    self.rotary_emb = CpmBeeRotaryEmbedding(config)

mindnlp.transformers.models.cpmbee.modeling_cpmbee.CpmBeeEmbeddingExt.forward(ids, ids_sub)

Construct and return the embeddings of the given input IDs and sub-IDs for the CpmBeeEmbeddingExt class.

PARAMETER DESCRIPTION
self

An instance of the CpmBeeEmbeddingExt class.

TYPE: CpmBeeEmbeddingExt

ids

The input IDs tensor:

  • Shape: (batch_size, sequence_length).
  • Type: int32 or int64.
  • Purpose: Represent the input IDs for which embeddings need to be forwarded.

TYPE: Tensor

ids_sub

The sub-IDs tensor.

  • Shape: (batch_size, sequence_length).
  • Type: int32 or int64.
  • Purpose: Represent the sub-IDs for modifying the embeddings.

TYPE: Tensor

RETURNS DESCRIPTION

mindspore.Tensor: The scaled token embeddings with rotary position information applied.

Source code in mindnlp/transformers/models/cpmbee/modeling_cpmbee.py
def forward(self, ids: mindspore.Tensor, ids_sub: mindspore.Tensor):
    """
    Construct and return the embeddings of the given input IDs and sub-IDs for the CpmBeeEmbeddingExt class.

    Args:
        self (CpmBeeEmbeddingExt): An instance of the CpmBeeEmbeddingExt class.
        ids (mindspore.Tensor):
            The input IDs tensor:

            - Shape: (batch_size, sequence_length).
            - Type: int32 or int64.
            - Purpose: Represent the input IDs for which embeddings need to be forwarded.
        ids_sub (mindspore.Tensor):
            The sub-IDs tensor.

            - Shape: (batch_size, sequence_length).
            - Type: int32 or int64.
            - Purpose: Represent the sub-IDs for modifying the embeddings.

    Returns:
        mindspore.Tensor: The scaled token embeddings with rotary position information applied.

    Raises:
        None.
    """
    embeds = super().forward(ids) / math.sqrt(self.dim_model)
    return self.rotary_emb(embeds, ids_sub)

mindnlp.transformers.models.cpmbee.modeling_cpmbee.CpmBeeEmbeddingExt.projection(x, ext_table=None)

This method projects the input tensor 'x' using a dense layer and optionally concatenates it with another tensor 'ext_table'.

PARAMETER DESCRIPTION
self

Instance of the class CpmBeeEmbeddingExt.

x

Input tensor to be projected. It should have a shape compatible with the weight tensor.

TYPE: Tensor

ext_table

Additional tensor to be concatenated with the projected tensor 'x'. It should have a compatible shape with 'x'. Defaults to None.

TYPE: Optional[Tensor] DEFAULT: None

RETURNS DESCRIPTION

mindspore.Tensor: The logits obtained by projecting 'x' onto the embedding weight. If 'ext_table' is provided and non-empty, logits for the extra tokens are concatenated along the last axis.

Source code in mindnlp/transformers/models/cpmbee/modeling_cpmbee.py
def projection(self, x: mindspore.Tensor, ext_table: Optional[mindspore.Tensor] = None):
    """
    This method projects the input tensor 'x' using a dense layer and optionally concatenates it with another tensor 'ext_table'.

    Args:
        self: Instance of the class CpmBeeEmbeddingExt.
        x (mindspore.Tensor): Input tensor to be projected. It should have a shape compatible with the weight tensor.
        ext_table (Optional[mindspore.Tensor], optional): Additional tensor to be concatenated with the projected tensor 'x'.
            It should have a compatible shape with 'x'. Defaults to None.

    Returns:
        mindspore.Tensor: The logits obtained by projecting 'x' onto the embedding weight.
            If 'ext_table' is provided and non-empty, logits for the extra tokens are concatenated along the last axis.

    Raises:
        None
    """
    logits = ops.dense(x / math.sqrt(self.dim_model), self.weight)
    if ext_table is not None and 0 not in ext_table.shape:
        logits_ext = ops.dense(x, ext_table)
        logits = ops.cat([logits, logits_ext], axis=-1)
    return logits
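
The projection reuses the embedding weight as an output head (note the same 1/sqrt(dim_model) scaling as in forward), and ext_table appends logits for extra, prompt-specific tokens. The sketch below repeats the same ops.dense/ops.cat calls with hand-made weights instead of a trained embedding; all sizes are made up.

Example
>>> import math
>>> import mindspore
>>> from mindspore import ops
...
>>> dim_model, vocab_size, n_ext = 8, 100, 3
>>> weight = ops.ones((vocab_size, dim_model), dtype=mindspore.float32)    # stand-in for the tied embedding weight
>>> ext_table = ops.ones((n_ext, dim_model), dtype=mindspore.float32)      # extra output tokens
>>> x = ops.ones((2, 5, dim_model), dtype=mindspore.float32)
...
>>> logits = ops.dense(x / math.sqrt(dim_model), weight)                   # (2, 5, vocab_size)
>>> logits = ops.cat([logits, ops.dense(x, ext_table)], axis=-1)           # (2, 5, vocab_size + n_ext)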

mindnlp.transformers.models.cpmbee.modeling_cpmbee.CpmBeeEncoder

Bases: Module

CpmBeeEncoder is a class that represents an encoder module for the CpmBeeTransformer model. This class inherits from nn.Module and is responsible for processing input data through multiple transformer blocks.

ATTRIBUTE DESCRIPTION
num_layers

The number of transformer blocks in the encoder.

TYPE: int

layers

List of CpmBeeTransformerBlock instances representing each transformer block in the encoder.

TYPE: ModuleList

output_layernorm

Layer normalization module for the encoder output.

TYPE: CpmBeeLayerNorm

METHOD DESCRIPTION
__init__

Initializes the CpmBeeEncoder instance with the provided configuration.

forward

Processes the input hidden_states through the encoder layers.

Args:

  • hidden_states (mindspore.Tensor): Input tensor of shape (batch, seq_len, dim_model).
  • attention_mask (mindspore.Tensor): Tensor to mask invalid areas during calculation of shape (batch, seq_len, seq_len).
  • position_bias (mindspore.Tensor): Tensor providing position information to the attention mechanism of shape (num_heads, seq_len, seq_len).
  • output_attentions (bool, optional): Indicates whether to return attention tensors of all layers.
  • output_hidden_states (bool, optional): Indicates whether to return hidden states of all layers.
  • past_key_values (Tuple[mindspore.Tensor, mindspore.Tensor], optional): Cached past key and value projection states.
  • use_cache (bool, optional): If True, past key and value states are returned for speeding up decoding.

Returns:

  • mindspore.Tensor: Processed hidden states after passing through all encoder layers.
  • Tuple[mindspore.Tensor, ...]: Cached key values if 'use_cache' is enabled.
  • Tuple[mindspore.Tensor, ...]: Hidden states of all layers if 'output_hidden_states' is enabled.
  • Tuple[mindspore.Tensor, ...]: Attention weights of all layers if 'output_attentions' is enabled.
Source code in mindnlp/transformers/models/cpmbee/modeling_cpmbee.py
class CpmBeeEncoder(nn.Module):

    """
    CpmBeeEncoder is a class that represents an encoder module for the CpmBeeTransformer model.
    This class inherits from nn.Module and is responsible for processing input data through multiple transformer blocks.

    Attributes:
        num_layers (int): The number of transformer blocks in the encoder.
        layers (nn.ModuleList): List of CpmBeeTransformerBlock instances representing each transformer block in the encoder.
        output_layernorm (CpmBeeLayerNorm): Layer normalization module for the encoder output.

    Methods:
        __init__:
            Initializes the CpmBeeEncoder instance with the provided configuration.

        forward:
            Processes the input hidden_states through the encoder layers.

             Args:

            - hidden_states (mindspore.Tensor): Input tensor of shape (batch, seq_len, dim_model).
            - attention_mask (mindspore.Tensor):
            Tensor to mask invalid areas during calculation of shape (batch, seq_len, seq_len).
            - position_bias (mindspore.Tensor):
            Tensor providing position information to the attention mechanism of shape (num_heads, seq_len, seq_len).
            - output_attentions (bool, optional): Indicates whether to return attention tensors of all layers.
            - output_hidden_states (bool, optional): Indicates whether to return hidden states of all layers.
            - past_key_values (Tuple[mindspore.Tensor, mindspore.Tensor], optional): Cached past key and value projection states.
            - use_cache (bool, optional): If True, past key and value states are returned for speeding up decoding.

            Returns:

            - mindspore.Tensor: Processed hidden states after passing through all encoder layers.
            - Tuple[mindspore.Tensor, ...]: Cached key values if 'use_cache' is enabled.
            - Tuple[mindspore.Tensor, ...]: Hidden states of all layers if 'output_hidden_states' is enabled.
            - Tuple[mindspore.Tensor, ...]: Attention weights of all layers if 'output_attentions' is enabled.
    """
    def __init__(self, config: CpmBeeConfig):
        """
        Initializes a new instance of the CpmBeeEncoder class.

        Args:
            self: The instance of the CpmBeeEncoder class.
            config (CpmBeeConfig): An instance of the CpmBeeConfig class containing configuration parameters for the encoder.
                This parameter is used to configure the encoder's behavior and settings.
                The config parameter must be of type CpmBeeConfig.

        Returns:
            None.

        Raises:
            AssertionError: If the length of config.mask_modules does not equal the number of hidden layers specified in config.
            AssertionError: If the length of mask_module within config.mask_modules is not 2 for each mask_module in the list.
        """
        super().__init__()
        self.num_layers = config.num_hidden_layers
        if config.mask_modules is not None:
            assert len(config.mask_modules) == self.num_layers, "The total number of masks should equal to num_layers"
            for mask_module in config.mask_modules:
                assert len(mask_module) == 2, "For encoder, each mask should be (mask_att, mask_ffn)"
        else:
            config.mask_modules = [(False, False)] * self.num_layers

        self.layers = nn.ModuleList(
            [
                CpmBeeTransformerBlock(
                    config, mask_att=config.mask_modules[ith][0], mask_ffn=config.mask_modules[ith][1]
                )
                for ith in range(self.num_layers)
            ]
        )

        self.output_layernorm = CpmBeeLayerNorm(config)

    def forward(
        self,
        hidden_states: mindspore.Tensor,
        attention_mask: mindspore.Tensor,
        position_bias: mindspore.Tensor,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        past_key_values: Optional[Tuple[mindspore.Tensor, mindspore.Tensor]] = None,
        use_cache: Optional[bool] = None,
    ):
        """
        Args:
            hidden_states (`mindspore.Tensor`):
                Input to the layer of shape `(batch, seq_len, dim_model)`
            attention_mask (`mindspore.Tensor`):
                Avoid invalid areas to participate in the calculation of shape `(batch, seq_len, seq_len)`
            position_bias (`mindspore.Tensor`):
                Provides position information to attention mechanism of shape `(num_heads, seq_len, seq_len)`
            output_attentions (`bool`, *optional*):
                Whether or not to return the attentions tensors of all attention layers.
            output_hidden_states (`bool`, *optional*):
                Whether or not to return the hidden states of all layers.
            past_key_values (`Tuple[mindspore.Tensor, mindspore.Tensor])`, *optional*):
                Cached past key and value projection states
            use_cache (`bool`, *optional*):
                If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding
                (see `past_key_values`).
        """
        all_hidden_states = () if output_hidden_states else None
        all_self_attns = () if output_attentions else None
        current_key_values = () if use_cache else None

        for i, layer in enumerate(self.layers):
            if output_hidden_states:
                all_hidden_states += (hidden_states,)
            layer_outputs = layer(
                hidden_states,
                attention_mask,
                position_bias,
                output_attentions=output_attentions,
                past_key_values=past_key_values[i] if past_key_values else None,
                use_cache=use_cache,
            )
            hidden_states, attn_weights, current_key_value = layer_outputs
            if output_attentions:
                all_self_attns += (attn_weights,)
            if current_key_values is not None:
                current_key_values = current_key_values + (current_key_value,)

        hidden_states = self.output_layernorm(hidden_states)

        if output_hidden_states:
            all_hidden_states += (hidden_states,)

        return hidden_states, current_key_values, all_hidden_states, all_self_attns

mindnlp.transformers.models.cpmbee.modeling_cpmbee.CpmBeeEncoder.__init__(config)

Initializes a new instance of the CpmBeeEncoder class.

PARAMETER DESCRIPTION
self

The instance of the CpmBeeEncoder class.

config

An instance of the CpmBeeConfig class containing configuration parameters for the encoder. This parameter is used to configure the encoder's behavior and settings. The config parameter must be of type CpmBeeConfig.

TYPE: CpmBeeConfig

RETURNS DESCRIPTION

None.

RAISES DESCRIPTION
AssertionError

If the length of config.mask_modules does not equal the number of hidden layers specified in config.

AssertionError

If the length of mask_module within config.mask_modules is not 2 for each mask_module in the list.

Source code in mindnlp/transformers/models/cpmbee/modeling_cpmbee.py
def __init__(self, config: CpmBeeConfig):
    """
    Initializes a new instance of the CpmBeeEncoder class.

    Args:
        self: The instance of the CpmBeeEncoder class.
        config (CpmBeeConfig): An instance of the CpmBeeConfig class containing configuration parameters for the encoder.
            This parameter is used to configure the encoder's behavior and settings.
            The config parameter must be of type CpmBeeConfig.

    Returns:
        None.

    Raises:
        AssertionError: If the length of config.mask_modules does not equal the number of hidden layers specified in config.
        AssertionError: If the length of mask_module within config.mask_modules is not 2 for each mask_module in the list.
    """
    super().__init__()
    self.num_layers = config.num_hidden_layers
    if config.mask_modules is not None:
        assert len(config.mask_modules) == self.num_layers, "The total number of masks should equal to num_layers"
        for mask_module in config.mask_modules:
            assert len(mask_module) == 2, "For encoder, each mask should be (mask_att, mask_ffn)"
    else:
        config.mask_modules = [(False, False)] * self.num_layers

    self.layers = nn.ModuleList(
        [
            CpmBeeTransformerBlock(
                config, mask_att=config.mask_modules[ith][0], mask_ffn=config.mask_modules[ith][1]
            )
            for ith in range(self.num_layers)
        ]
    )

    self.output_layernorm = CpmBeeLayerNorm(config)
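
The mask_modules handling above expects one (mask_att, mask_ffn) pair per layer and defaults to pruning nothing. A plain-Python illustration of a valid specification (the pruned layers are chosen arbitrarily):

Example
>>> num_hidden_layers = 4
>>> # default: keep every attention and feed-forward block
>>> mask_modules = [(False, False)] * num_hidden_layers
>>> # prune the FFN of layer 2 and the attention block of layer 3
>>> mask_modules[2] = (False, True)
>>> mask_modules[3] = (True, False)
>>> mask_modules
[(False, False), (False, False), (False, True), (True, False)]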

mindnlp.transformers.models.cpmbee.modeling_cpmbee.CpmBeeEncoder.forward(hidden_states, attention_mask, position_bias, output_attentions=None, output_hidden_states=None, past_key_values=None, use_cache=None)

PARAMETER DESCRIPTION
hidden_states

Input to the layer of shape (batch, seq_len, dim_model)

TYPE: `mindspore.Tensor`

attention_mask

Avoid invalid areas to participate in the calculation of shape (batch, seq_len, seq_len)

TYPE: `mindspore.Tensor`

position_bias

Provides position information to attention mechanism of shape (num_heads, seq_len, seq_len)

TYPE: `mindspore.Tensor`

output_attentions

Whether or not to return the attentions tensors of all attention layers.

TYPE: `bool`, *optional* DEFAULT: None

output_hidden_states

Whether or not to return the hidden states of all layers.

TYPE: `bool`, *optional* DEFAULT: None

past_key_values

Cached past key and value projection states

TYPE: `Tuple[mindspore.Tensor, mindspore.Tensor])`, *optional* DEFAULT: None

use_cache

If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).

TYPE: `bool`, *optional* DEFAULT: None

Source code in mindnlp/transformers/models/cpmbee/modeling_cpmbee.py
def forward(
    self,
    hidden_states: mindspore.Tensor,
    attention_mask: mindspore.Tensor,
    position_bias: mindspore.Tensor,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    past_key_values: Optional[Tuple[mindspore.Tensor, mindspore.Tensor]] = None,
    use_cache: Optional[bool] = None,
):
    """
    Args:
        hidden_states (`mindspore.Tensor`):
            Input to the layer of shape `(batch, seq_len, dim_model)`
        attention_mask (`mindspore.Tensor`):
            Avoid invalid areas to participate in the calculation of shape `(batch, seq_len, seq_len)`
        position_bias (`mindspore.Tensor`):
            Provides position information to attention mechanism of shape `(num_heads, seq_len, seq_len)`
        output_attentions (`bool`, *optional*):
            Whether or not to return the attentions tensors of all attention layers.
        output_hidden_states (`bool`, *optional*):
            Whether or not to return the hidden states of all layers.
        past_key_values (`Tuple[mindspore.Tensor, mindspore.Tensor])`, *optional*):
            Cached past key and value projection states
        use_cache (`bool`, *optional*):
            If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding
            (see `past_key_values`).
    """
    all_hidden_states = () if output_hidden_states else None
    all_self_attns = () if output_attentions else None
    current_key_values = () if use_cache else None

    for i, layer in enumerate(self.layers):
        if output_hidden_states:
            all_hidden_states += (hidden_states,)
        layer_outputs = layer(
            hidden_states,
            attention_mask,
            position_bias,
            output_attentions=output_attentions,
            past_key_values=past_key_values[i] if past_key_values else None,
            use_cache=use_cache,
        )
        hidden_states, attn_weights, current_key_value = layer_outputs
        if output_attentions:
            all_self_attns += (attn_weights,)
        if current_key_values is not None:
            current_key_values = current_key_values + (current_key_value,)

    hidden_states = self.output_layernorm(hidden_states)

    if output_hidden_states:
        all_hidden_states += (hidden_states,)

    return hidden_states, current_key_values, all_hidden_states, all_self_attns

mindnlp.transformers.models.cpmbee.modeling_cpmbee.CpmBeeFFNBlock

Bases: Module

This class represents a feed-forward block in the CpmBee model. It applies layer normalization, the feed-forward sub-layer, optional dropout, and a scaled residual connection to the hidden states.

The CpmBeeFFNBlock class inherits from nn.Module.

ATTRIBUTE DESCRIPTION
layernorm_before_ffn

An instance of the CpmBeeLayerNorm class that performs layer normalization before the feed-forward layer.

TYPE: CpmBeeLayerNorm

ffn

An instance of the CpmBeeFeedForward class that represents the feed-forward layer.

TYPE: CpmBeeFeedForward

dropout

An optional dropout layer. If None, no dropout is applied.

TYPE: Dropout or None

METHOD DESCRIPTION
__init__

Initializes the CpmBeeFFNBlock object.

forward

Processes the hidden states before the feed-forward layer.

Source code in mindnlp/transformers/models/cpmbee/modeling_cpmbee.py
class CpmBeeFFNBlock(nn.Module):

    """
    This class represents a feed-forward block in the CpmBee model. It applies layer normalization, the feed-forward
    sub-layer, optional dropout, and a scaled residual connection to the hidden states.

    The CpmBeeFFNBlock class inherits from nn.Module.

    Attributes:
        layernorm_before_ffn (CpmBeeLayerNorm): An instance of the CpmBeeLayerNorm class that performs layer normalization before the feed-forward layer.
        ffn (CpmBeeFeedForward): An instance of the CpmBeeFeedForward class that represents the feed-forward layer.
        dropout (nn.Dropout or None): An optional dropout layer. If None, no dropout is applied.

    Methods:
        __init__: Initializes the CpmBeeFFNBlock object.
        forward: Processes the hidden states before the feed-forward layer.

    """
    def __init__(self, config: CpmBeeConfig):
        """
        Initializes a CpmBeeFFNBlock instance.

        Args:
            self: The current object instance.
            config (CpmBeeConfig): The configuration object containing the parameters for the CpmBeeFFNBlock.
                This object must be an instance of CpmBeeConfig class.

        Returns:
            None.

        Raises:
            None.
        """
        super().__init__()
        self.layernorm_before_ffn = CpmBeeLayerNorm(config)
        self.ffn = CpmBeeFeedForward(config)
        if config.dropout_p:
            self.dropout = nn.Dropout(p=config.dropout_p)
        else:
            self.dropout = None

    def forward(
        self,
        hidden_states: mindspore.Tensor,
    ):
        """
        Args:
            hidden_states (`mindspore.Tensor` of shape `(batch, len_seq, dim_model)`):
                Hidden states before feed forward layer.
        """
        ln_outputs = self.layernorm_before_ffn(hidden_states)
        outputs = self.ffn(ln_outputs)
        if self.dropout is not None:
            outputs = self.dropout(outputs)
        hidden_states = (hidden_states + outputs) / 1.05
        return hidden_states

mindnlp.transformers.models.cpmbee.modeling_cpmbee.CpmBeeFFNBlock.__init__(config)

Initializes a CpmBeeFFNBlock instance.

PARAMETER DESCRIPTION
self

The current object instance.

config

The configuration object containing the parameters for the CpmBeeFFNBlock. This object must be an instance of CpmBeeConfig class.

TYPE: CpmBeeConfig

RETURNS DESCRIPTION

None.

Source code in mindnlp/transformers/models/cpmbee/modeling_cpmbee.py
def __init__(self, config: CpmBeeConfig):
    """
    Initializes a CpmBeeFFNBlock instance.

    Args:
        self: The current object instance.
        config (CpmBeeConfig): The configuration object containing the parameters for the CpmBeeFFNBlock.
            This object must be an instance of CpmBeeConfig class.

    Returns:
        None.

    Raises:
        None.
    """
    super().__init__()
    self.layernorm_before_ffn = CpmBeeLayerNorm(config)
    self.ffn = CpmBeeFeedForward(config)
    if config.dropout_p:
        self.dropout = nn.Dropout(p=config.dropout_p)
    else:
        self.dropout = None

mindnlp.transformers.models.cpmbee.modeling_cpmbee.CpmBeeFFNBlock.forward(hidden_states)

PARAMETER DESCRIPTION
hidden_states

Hidden states before feed forward layer.

TYPE: `mindspore.Tensor` of shape `(batch, len_seq, dim_model)`

Source code in mindnlp/transformers/models/cpmbee/modeling_cpmbee.py
def forward(
    self,
    hidden_states: mindspore.Tensor,
):
    """
    Args:
        hidden_states (`mindspore.Tensor` of shape `(batch, len_seq, dim_model)`):
            Hidden states before feed forward layer.
    """
    ln_outputs = self.layernorm_before_ffn(hidden_states)
    outputs = self.ffn(ln_outputs)
    if self.dropout is not None:
        outputs = self.dropout(outputs)
    hidden_states = (hidden_states + outputs) / 1.05
    return hidden_states
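
Note that the residual update is not a plain addition: the sum of the input and the feed-forward output is divided by 1.05, presumably to keep the magnitude of the residual stream from drifting upward across many layers. A minimal numeric sketch with made-up shapes:

Example
>>> import mindspore
>>> from mindspore import ops
...
>>> hidden_states = ops.ones((2, 4, 8), dtype=mindspore.float32)
>>> ffn_outputs = ops.ones((2, 4, 8), dtype=mindspore.float32) * 0.1
>>> hidden_states = (hidden_states + ffn_outputs) / 1.05    # every entry is now (1.0 + 0.1) / 1.05 ≈ 1.048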

mindnlp.transformers.models.cpmbee.modeling_cpmbee.CpmBeeFeedForward

Bases: Module

This class represents a feedforward neural network layer for the CpmBee model. It consists of a dense gated activation layer (CpmBeeDenseGatedACT), optional dropout layer, and a linear transformation layer (CpmBeeLinear).

ATTRIBUTE DESCRIPTION
w_in

Instance of CpmBeeDenseGatedACT for processing input hidden states.

dropout

Optional dropout layer for regularization.

w_out

Instance of CpmBeeLinear for transforming hidden states to output.

METHOD DESCRIPTION
__init__

Constructor method initializing the feedforward layer.

forward

Method for processing input hidden states through the feedforward layer.

PARAMETER DESCRIPTION
config

Configuration object of type CpmBeeConfig containing layer specifications.

TYPE: CpmBeeConfig

hidden_states

Input tensor of shape (batch, seq_len, dim_in) representing hidden states.

RETURNS DESCRIPTION

mindspore.Tensor: Transformed hidden states after passing through the feedforward layer.

Source code in mindnlp/transformers/models/cpmbee/modeling_cpmbee.py
class CpmBeeFeedForward(nn.Module):

    """
    This class represents a feedforward neural network layer for the CpmBee model.
    It consists of a dense gated activation layer (`CpmBeeDenseGatedACT`), optional dropout layer,
    and a linear transformation layer (`CpmBeeLinear`).

    Attributes:
        w_in: Instance of `CpmBeeDenseGatedACT` for processing input hidden states.
        dropout: Optional dropout layer for regularization.
        w_out: Instance of `CpmBeeLinear` for transforming hidden states to output.

    Methods:
        __init__: Constructor method initializing the feedforward layer.
        forward: Method for processing input hidden states through the feedforward layer.

    Args:
        config: Configuration object of type `CpmBeeConfig` containing layer specifications.
        hidden_states: Input tensor of shape `(batch, seq_len, dim_in)` representing hidden states.

    Returns:
        mindspore.Tensor: Transformed hidden states after passing through the feedforward layer.
    """
    def __init__(self, config: CpmBeeConfig):
        """
        Initializes an instance of the CpmBeeFeedForward class.

        Args:
            self: The instance of the class.
            config (CpmBeeConfig): An object of the CpmBeeConfig class containing configuration parameters.

        Returns:
            None

        Raises:
            None
        """
        super().__init__()
        self.w_in = CpmBeeDenseGatedACT(config)
        if config.dropout_p is not None:
            self.dropout = nn.Dropout(p=config.dropout_p)
        else:
            self.dropout = None

        self.w_out = CpmBeeLinear(config.dim_ff, config.hidden_size, dtype=config.ms_dtype)

    def forward(self, hidden_states: mindspore.Tensor):
        """
        Args:
            hidden_states (`mindspore.Tensor` of shape `(batch, seq_len, dim_in)`)
        """
        hidden_states = self.w_in(hidden_states)

        if self.dropout is not None:
            hidden_states = self.dropout(hidden_states)

        hidden_states = self.w_out(hidden_states)

        return hidden_states
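
Putting the pieces together, the feed-forward layer computes w_out(dropout(GELU(w_0(x)) * w_1(x))), mapping hidden_size -> dim_ff -> hidden_size. The sketch below composes the same shape-level pipeline with mindspore.nn stand-ins (Dense, GELU, Dropout) in place of the CpmBee sub-modules; the sizes are made up.

Example
>>> import mindspore
>>> from mindspore import nn, ops
...
>>> hidden_size, dim_ff = 8, 16
>>> x = ops.ones((2, 5, hidden_size), dtype=mindspore.float32)
...
>>> w_0 = nn.Dense(hidden_size, dim_ff, has_bias=False)    # stand-ins for CpmBeeLinear
>>> w_1 = nn.Dense(hidden_size, dim_ff, has_bias=False)
>>> w_out = nn.Dense(dim_ff, hidden_size, has_bias=False)
>>> dropout = nn.Dropout(p=0.0)
...
>>> gated = nn.GELU()(w_0(x)) * w_1(x)                     # the CpmBeeDenseGatedACT step
>>> out = w_out(dropout(gated))                            # back to (2, 5, hidden_size)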

mindnlp.transformers.models.cpmbee.modeling_cpmbee.CpmBeeFeedForward.__init__(config)

Initializes an instance of the CpmBeeFeedForward class.

PARAMETER DESCRIPTION
self

The instance of the class.

config

An object of the CpmBeeConfig class containing configuration parameters.