seggpt

mindnlp.transformers.models.seggpt.configuration_seggpt

SegGpt model configuration

mindnlp.transformers.models.seggpt.configuration_seggpt.SegGptConfig

Bases: PretrainedConfig

This is the configuration class to store the configuration of a [SegGptModel]. It is used to instantiate a SegGPT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the SegGPT BAAI/seggpt-vit-large architecture.

Configuration objects inherit from [PretrainedConfig] and can be used to control the model outputs. Read the documentation from [PretrainedConfig] for more information.

PARAMETER DESCRIPTION
hidden_size

Dimensionality of the encoder layers and the pooler layer.

TYPE: `int`, *optional*, defaults to 1024 DEFAULT: 1024

num_hidden_layers

Number of hidden layers in the Transformer encoder.

TYPE: `int`, *optional*, defaults to 24 DEFAULT: 24

num_attention_heads

Number of attention heads for each attention layer in the Transformer encoder.

TYPE: `int`, *optional*, defaults to 16 DEFAULT: 16

hidden_act

The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu" and "gelu_new" are supported.

TYPE: `str` or `function`, *optional*, defaults to `"gelu"` DEFAULT: 'gelu'

hidden_dropout_prob

The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.

TYPE: `float`, *optional*, defaults to 0.0 DEFAULT: 0.0

initializer_range

The standard deviation of the truncated_normal_initializer for initializing all weight matrices.

TYPE: `float`, *optional*, defaults to 0.02 DEFAULT: 0.02

layer_norm_eps

The epsilon used by the layer normalization layers.

TYPE: `float`, *optional*, defaults to 1e-06 DEFAULT: 1e-06

image_size

The size (resolution) of each image.

TYPE: `List[int]`, *optional*, defaults to `[896, 448]` DEFAULT: [896, 448]

patch_size

The size (resolution) of each patch.

TYPE: `int`, *optional*, defaults to 16 DEFAULT: 16

num_channels

The number of input channels.

TYPE: `int`, *optional*, defaults to 3 DEFAULT: 3

qkv_bias

Whether to add a bias to the queries, keys and values.

TYPE: `bool`, *optional*, defaults to `True` DEFAULT: True

mlp_dim

The dimensionality of the MLP layer in the Transformer encoder. If unset, defaults to hidden_size * 4.

TYPE: `int`, *optional* DEFAULT: None

drop_path_rate

The drop path rate for the dropout layers.

TYPE: `float`, *optional*, defaults to 0.1 DEFAULT: 0.1

pretrain_image_size

The pretrained size of the absolute position embeddings.

TYPE: `int`, *optional*, defaults to 224 DEFAULT: 224

decoder_hidden_size

Hidden size for decoder.

TYPE: `int`, *optional*, defaults to 64 DEFAULT: 64

use_relative_position_embeddings

Whether to use relative position embeddings in the attention layers.

TYPE: `bool`, *optional*, defaults to `True` DEFAULT: True

merge_index

The index of the encoder layer to merge the embeddings.

TYPE: `int`, *optional*, defaults to 2 DEFAULT: 2

intermediate_hidden_state_indices

The indices of the encoder layers which we store as features for the decoder.

TYPE: `List[int]`, *optional*, defaults to `[5, 11, 17, 23]` DEFAULT: [5, 11, 17, 23]

beta

Regularization factor for SegGptLoss (smooth-l1 loss).

TYPE: `float`, *optional*, defaults to 0.01 DEFAULT: 0.01

Example
>>> from mindnlp.transformers import SegGptConfig, SegGptModel
...
>>> # Initializing a SegGPT seggpt-vit-large style configuration
>>> configuration = SegGptConfig()
...
>>> # Initializing a model (with random weights) from the seggpt-vit-large style configuration
>>> model = SegGptModel(configuration)
...
>>> # Accessing the model configuration
>>> configuration = model.config
Source code in mindnlp/transformers/models/seggpt/configuration_seggpt.py
class SegGptConfig(PretrainedConfig):
    r"""
    This is the configuration class to store the configuration of a [`SegGptModel`]. It is used to instantiate a SegGPT
    model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
    defaults will yield a similar configuration to that of the SegGPT
    [BAAI/seggpt-vit-large](https://huggingface.co/BAAI/seggpt-vit-large) architecture.

    Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
    documentation from [`PretrainedConfig`] for more information.

    Args:
        hidden_size (`int`, *optional*, defaults to 1024):
            Dimensionality of the encoder layers and the pooler layer.
        num_hidden_layers (`int`, *optional*, defaults to 24):
            Number of hidden layers in the Transformer encoder.
        num_attention_heads (`int`, *optional*, defaults to 16):
            Number of attention heads for each attention layer in the Transformer encoder.
        hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
            The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
            `"relu"`, `"selu"` and `"gelu_new"` are supported.
        hidden_dropout_prob (`float`, *optional*, defaults to 0.0):
            The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
        initializer_range (`float`, *optional*, defaults to 0.02):
            The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
        layer_norm_eps (`float`, *optional*, defaults to 1e-06):
            The epsilon used by the layer normalization layers.
        image_size (`List[int]`, *optional*, defaults to `[896, 448]`):
            The size (resolution) of each image.
        patch_size (`int`, *optional*, defaults to 16):
            The size (resolution) of each patch.
        num_channels (`int`, *optional*, defaults to 3):
            The number of input channels.
        qkv_bias (`bool`, *optional*, defaults to `True`):
            Whether to add a bias to the queries, keys and values.
        mlp_dim (`int`, *optional*):
            The dimensionality of the MLP layer in the Transformer encoder. If unset, defaults to
            `hidden_size` * 4.
        drop_path_rate (`float`, *optional*, defaults to 0.1):
            The drop path rate for the dropout layers.
        pretrain_image_size (`int`, *optional*, defaults to 224):
            The pretrained size of the absolute position embeddings.
        decoder_hidden_size (`int`, *optional*, defaults to 64):
            Hidden size for decoder.
        use_relative_position_embeddings (`bool`, *optional*, defaults to `True`):
            Whether to use relative position embeddings in the attention layers.
        merge_index (`int`, *optional*, defaults to 2):
            The index of the encoder layer to merge the embeddings.
        intermediate_hidden_state_indices (`List[int]`, *optional*, defaults to `[5, 11, 17, 23]`):
            The indices of the encoder layers which we store as features for the decoder.
        beta (`float`, *optional*, defaults to 0.01):
            Regularization factor for SegGptLoss (smooth-l1 loss).

    Example:
        ```python
        >>> from mindnlp.transformers import SegGptConfig, SegGptModel
        ...
        >>> # Initializing a SegGPT seggpt-vit-large style configuration
        >>> configuration = SegGptConfig()
        ...
        >>> # Initializing a model (with random weights) from the seggpt-vit-large style configuration
        >>> model = SegGptModel(configuration)
        ...
        >>> # Accessing the model configuration
        >>> configuration = model.config
        ```
    """

    model_type = "seggpt"

    def __init__(
        self,
        hidden_size=1024,
        num_hidden_layers=24,
        num_attention_heads=16,
        hidden_act="gelu",
        hidden_dropout_prob=0.0,
        initializer_range=0.02,
        layer_norm_eps=1e-6,
        image_size=[896, 448],
        patch_size=16,
        num_channels=3,
        qkv_bias=True,
        mlp_dim=None,
        drop_path_rate=0.1,
        pretrain_image_size=224,
        decoder_hidden_size=64,
        use_relative_position_embeddings=True,
        merge_index=2,
        intermediate_hidden_state_indices=[5, 11, 17, 23],
        beta=0.01,
        **kwargs,
    ):
        super().__init__(**kwargs)

        if merge_index > min(intermediate_hidden_state_indices):
            raise ValueError(
                f"Merge index must be less than the minimum encoder output index, but got {merge_index} and {intermediate_hidden_state_indices}"
            )
        self.hidden_size = hidden_size
        self.num_hidden_layers = num_hidden_layers
        self.num_attention_heads = num_attention_heads
        self.hidden_act = hidden_act
        self.hidden_dropout_prob = hidden_dropout_prob
        self.initializer_range = initializer_range
        self.layer_norm_eps = layer_norm_eps
        self.image_size = image_size
        self.patch_size = patch_size
        self.num_channels = num_channels
        self.qkv_bias = qkv_bias
        self.drop_path_rate = drop_path_rate
        self.pretrain_image_size = pretrain_image_size
        self.decoder_hidden_size = decoder_hidden_size
        self.use_relative_position_embeddings = use_relative_position_embeddings
        self.merge_index = merge_index
        self.intermediate_hidden_state_indices = intermediate_hidden_state_indices
        self.beta = beta
        self.mlp_dim = int(hidden_size * 4) if mlp_dim is None else mlp_dim
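
The snippet below is a small illustrative sketch (it only assumes the `mindnlp` package shown in the source path above): it checks that `mlp_dim` falls back to `hidden_size * 4` when left unset, and that `merge_index` must be smaller than the smallest entry of `intermediate_hidden_state_indices`.

```python
from mindnlp.transformers.models.seggpt.configuration_seggpt import SegGptConfig

# With the defaults, mlp_dim is derived from hidden_size (1024 * 4).
config = SegGptConfig()
print(config.mlp_dim)  # 4096

# merge_index must come before the first intermediate feature layer,
# otherwise __init__ raises a ValueError (see the check in the source above).
try:
    SegGptConfig(merge_index=6, intermediate_hidden_state_indices=[5, 11, 17, 23])
except ValueError as err:
    print(err)
```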

mindnlp.transformers.models.seggpt.image_processing_seggpt

Image processor class for SegGPT.

mindnlp.transformers.models.seggpt.image_processing_seggpt.SegGptImageProcessor

Bases: BaseImageProcessor

Constructs a SegGpt image processor.

PARAMETER DESCRIPTION
do_resize

Whether to resize the image's (height, width) dimensions to the specified (size["height"], size["width"]). Can be overridden by the do_resize parameter in the preprocess method.

TYPE: `bool`, *optional*, defaults to `True` DEFAULT: True

size

Size of the output image after resizing. Can be overridden by the size parameter in the preprocess method.

TYPE: `dict`, *optional*, defaults to `{"height": 448, "width": 448}` DEFAULT: None

resample

Resampling filter to use if resizing the image. Can be overridden by the resample parameter in the preprocess method.

TYPE: `PILImageResampling`, *optional*, defaults to `Resampling.BICUBIC` DEFAULT: BICUBIC

do_rescale

Whether to rescale the image by the specified scale rescale_factor. Can be overridden by the do_rescale parameter in the preprocess method.

TYPE: `bool`, *optional*, defaults to `True` DEFAULT: True

rescale_factor

Scale factor to use if rescaling the image. Can be overridden by the rescale_factor parameter in the preprocess method.

TYPE: `int` or `float`, *optional*, defaults to `1/255` DEFAULT: 1 / 255

do_normalize

Whether to normalize the image. Can be overridden by the do_normalize parameter in the preprocess method.

TYPE: `bool`, *optional*, defaults to `True` DEFAULT: True

image_mean

Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the image_mean parameter in the preprocess method.

TYPE: `float` or `List[float]`, *optional*, defaults to `IMAGENET_DEFAULT_MEAN` DEFAULT: None

image_std

Standard deviation to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the image_std parameter in the preprocess method.

TYPE: `float` or `List[float]`, *optional*, defaults to `IMAGENET_DEFAULT_STD` DEFAULT: None

do_convert_rgb

Whether to convert the prompt mask to RGB format. Can be overridden by the do_convert_rgb parameter in the preprocess method.

TYPE: `bool`, *optional*, defaults to `True` DEFAULT: True
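
As a short construction sketch (argument values below are only illustrative), the processor resolves `size` through `get_size_dict` and falls back to the ImageNet mean/std when none are given:

```python
from mindnlp.transformers.models.seggpt.image_processing_seggpt import SegGptImageProcessor

# Defaults: resize to 448x448, rescale by 1/255, normalize with ImageNet statistics.
processor = SegGptImageProcessor()
print(processor.size)        # {'height': 448, 'width': 448}
print(processor.image_mean)  # IMAGENET_DEFAULT_MEAN

# The same options can be overridden at construction time or per call to `preprocess`.
small_processor = SegGptImageProcessor(size={"height": 224, "width": 224}, do_normalize=False)
```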

Source code in mindnlp/transformers/models/seggpt/image_processing_seggpt.py
class SegGptImageProcessor(BaseImageProcessor):
    r"""
    Constructs a SegGpt image processor.

    Args:
        do_resize (`bool`, *optional*, defaults to `True`):
            Whether to resize the image's (height, width) dimensions to the specified `(size["height"],
            size["width"])`. Can be overridden by the `do_resize` parameter in the `preprocess` method.
        size (`dict`, *optional*, defaults to `{"height": 448, "width": 448}`):
            Size of the output image after resizing. Can be overridden by the `size` parameter in the `preprocess`
            method.
        resample (`PILImageResampling`, *optional*, defaults to `Resampling.BICUBIC`):
            Resampling filter to use if resizing the image. Can be overridden by the `resample` parameter in the
            `preprocess` method.
        do_rescale (`bool`, *optional*, defaults to `True`):
            Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the `do_rescale`
            parameter in the `preprocess` method.
        rescale_factor (`int` or `float`, *optional*, defaults to `1/255`):
            Scale factor to use if rescaling the image. Can be overridden by the `rescale_factor` parameter in the
            `preprocess` method.
        do_normalize (`bool`, *optional*, defaults to `True`):
            Whether to normalize the image. Can be overridden by the `do_normalize` parameter in the `preprocess`
            method.
        image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_DEFAULT_MEAN`):
            Mean to use if normalizing the image. This is a float or list of floats the length of the number of
            channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method.
        image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_DEFAULT_STD`):
            Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
            number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method.
        do_convert_rgb (`bool`, *optional*, defaults to `True`):
            Whether to convert the prompt mask to RGB format. Can be overridden by the `do_convert_rgb` parameter in the
            `preprocess` method.
    """

    model_input_names = ["pixel_values"]

    def __init__(
        self,
        do_resize: bool = True,
        size: Optional[Dict[str, int]] = None,
        resample: PILImageResampling = PILImageResampling.BICUBIC,
        do_rescale: bool = True,
        rescale_factor: Union[int, float] = 1 / 255,
        do_normalize: bool = True,
        image_mean: Optional[Union[float, List[float]]] = None,
        image_std: Optional[Union[float, List[float]]] = None,
        do_convert_rgb: bool = True,
        **kwargs,
    ) -> None:
        super().__init__(**kwargs)
        size = size if size is not None else {"height": 448, "width": 448}
        size = get_size_dict(size)
        self.do_resize = do_resize
        self.do_rescale = do_rescale
        self.do_normalize = do_normalize
        self.size = size
        self.resample = resample
        self.rescale_factor = rescale_factor
        self.image_mean = image_mean if image_mean is not None else IMAGENET_DEFAULT_MEAN
        self.image_std = image_std if image_std is not None else IMAGENET_DEFAULT_STD
        self.do_convert_rgb = do_convert_rgb

    def get_palette(self, num_labels: int) -> List[Tuple[int, int]]:
        """Build a palette to map the prompt mask from a single channel to a 3 channel RGB.

        Args:
            num_labels (`int`):
                Number of classes in the segmentation task (excluding the background).

        Returns:
            `List[Tuple[int, int]]`: Palette to map the prompt mask from a single channel to a 3 channel RGB.
        """
        return build_palette(num_labels)

    def mask_to_rgb(
        self,
        image: np.ndarray,
        palette: Optional[List[Tuple[int, int]]] = None,
        data_format: Optional[Union[str, ChannelDimension]] = None,
    ) -> np.ndarray:
        """Converts a segmentation map to RGB format.

        Args:
            image (`np.ndarray`):
                Segmentation map with dimensions (height, width) where pixel values represent the class index.
            palette (`List[Tuple[int, int]]`, *optional*, defaults to `None`):
                Palette to use to convert the mask to RGB format. If unset, the mask is duplicated across the channel
                dimension.
            data_format (`ChannelDimension` or `str`, *optional*):
                The channel dimension format for the output image. If unset, the channel dimension format of the input
                image is used. Can be one of:

                - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format.
                - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format.

        Returns:
            `np.ndarray`: The mask in RGB format.
        """
        return mask_to_rgb(image, palette=palette, data_format=data_format)

    # Copied from transformers.models.vit.image_processing_vit.ViTImageProcessor.resize with PILImageResampling.BILINEAR->PILImageResampling.BICUBIC
    def resize(
        self,
        image: np.ndarray,
        size: Dict[str, int],
        resample: PILImageResampling = PILImageResampling.BICUBIC,
        data_format: Optional[Union[str, ChannelDimension]] = None,
        input_data_format: Optional[Union[str, ChannelDimension]] = None,
        **kwargs,
    ) -> np.ndarray:
        """
        Resize an image to `(size["height"], size["width"])`.

        Args:
            image (`np.ndarray`):
                Image to resize.
            size (`Dict[str, int]`):
                Dictionary in the format `{"height": int, "width": int}` specifying the size of the output image.
            resample (`PILImageResampling`, *optional*, defaults to `PILImageResampling.BICUBIC`):
                `PILImageResampling` filter to use when resizing the image e.g. `PILImageResampling.BICUBIC`.
            data_format (`ChannelDimension` or `str`, *optional*):
                The channel dimension format for the output image. If unset, the channel dimension format of the input
                image is used. Can be one of:

                - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format.
                - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format.
                - `"none"` or `ChannelDimension.NONE`: image in (height, width) format.
            input_data_format (`ChannelDimension` or `str`, *optional*):
                The channel dimension format for the input image. If unset, the channel dimension format is inferred
                from the input image. Can be one of:

                - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format.
                - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format.
                - `"none"` or `ChannelDimension.NONE`: image in (height, width) format.

        Returns:
            `np.ndarray`: The resized image.
        """
        size = get_size_dict(size)
        if "height" not in size or "width" not in size:
            raise ValueError(
                f"The `size` dictionary must contain the keys `height` and `width`. Got {size.keys()}")
        output_size = (size["height"], size["width"])
        return resize(
            image,
            size=output_size,
            resample=resample,
            data_format=data_format,
            input_data_format=input_data_format,
            **kwargs,
        )

    def _preprocess_step(
        self,
        images: ImageInput,
        do_resize: Optional[bool] = None,
        size: Dict[str, int] = None,
        resample: PILImageResampling = None,
        do_rescale: Optional[bool] = None,
        rescale_factor: Optional[float] = None,
        do_normalize: Optional[bool] = None,
        image_mean: Optional[Union[float, List[float]]] = None,
        image_std: Optional[Union[float, List[float]]] = None,
        data_format: Union[str, ChannelDimension] = ChannelDimension.FIRST,
        input_data_format: Optional[Union[str, ChannelDimension]] = None,
        do_convert_rgb: Optional[bool] = None,
        num_labels: Optional[int] = None,
        **kwargs,
    ):
        """
        Preprocess an image or batch of images.

        Args:
            images (`ImageInput`):
                Image to _preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If
                passing in images with pixel values between 0 and 1, set `do_rescale=False`.
            do_resize (`bool`, *optional*, defaults to `self.do_resize`):
                Whether to resize the image.
            size (`Dict[str, int]`, *optional*, defaults to `self.size`):
                Dictionary in the format `{"height": h, "width": w}` specifying the size of the output image after
                resizing.
            resample (`PILImageResampling` filter, *optional*, defaults to `self.resample`):
                `PILImageResampling` filter to use if resizing the image e.g. `PILImageResampling.BICUBIC`. Only has
                an effect if `do_resize` is set to `True`.
            do_rescale (`bool`, *optional*, defaults to `self.do_rescale`):
                Whether to rescale the image values between [0 - 1].
            rescale_factor (`float`, *optional*, defaults to `self.rescale_factor`):
                Rescale factor to rescale the image by if `do_rescale` is set to `True`.
            do_normalize (`bool`, *optional*, defaults to `self.do_normalize`):
                Whether to normalize the image.
            image_mean (`float` or `List[float]`, *optional*, defaults to `self.image_mean`):
                Image mean to use if `do_normalize` is set to `True`.
            image_std (`float` or `List[float]`, *optional*, defaults to `self.image_std`):
                Image standard deviation to use if `do_normalize` is set to `True`.
            return_tensors (`str` or `TensorType`, *optional*):
                The type of tensors to return. Can be one of:

                - Unset: Return a list of `np.ndarray`.
                - `TensorType.TENSORFLOW` or `'tf'`: Return a batch of type `tf.Tensor`.
                - `TensorType.PYTORCH` or `'pt'`: Return a batch of type `torch.Tensor`.
                - `TensorType.NUMPY` or `'np'`: Return a batch of type `np.ndarray`.
                - `TensorType.JAX` or `'jax'`: Return a batch of type `jax.numpy.ndarray`.
            data_format (`ChannelDimension` or `str`, *optional*, defaults to `ChannelDimension.FIRST`):
                The channel dimension format for the output image. Can be one of:

                - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format.
                - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format.
                - Unset: Use the channel dimension format of the input image.
            input_data_format (`ChannelDimension` or `str`, *optional*):
                The channel dimension format for the input image. If unset, the channel dimension format is inferred
                from the input image. Can be one of:

                - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format.
                - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format.
                - `"none"` or `ChannelDimension.NONE`: image in (height, width) format.
            do_convert_rgb (`bool`, *optional*, defaults to `self.do_convert_rgb`):
                Whether to convert the prompt mask to RGB format. If `num_labels` is specified, a palette will be built
                to map the prompt mask from a single channel to a 3 channel RGB. If unset, the prompt mask is duplicated
                across the channel dimension. Must be set to `False` if the prompt mask is already in RGB format.
            num_labels: (`int`, *optional*):
                Number of classes in the segmentation task (excluding the background). If specified, a palette will be
                built, assuming that class_idx 0 is the background, to map the prompt mask from a single class_idx
                channel to a 3 channel RGB. Not specifying this will result in the prompt mask either being passed
                through as is if it is already in RGB format or being duplicated across the channel dimension.
        """
        do_resize = do_resize if do_resize is not None else self.do_resize
        do_rescale = do_rescale if do_rescale is not None else self.do_rescale
        do_normalize = do_normalize if do_normalize is not None else self.do_normalize
        do_convert_rgb = do_convert_rgb if do_convert_rgb is not None else self.do_convert_rgb
        resample = resample if resample is not None else self.resample
        rescale_factor = rescale_factor if rescale_factor is not None else self.rescale_factor
        image_mean = image_mean if image_mean is not None else self.image_mean
        image_std = image_std if image_std is not None else self.image_std

        size = size if size is not None else self.size
        size_dict = get_size_dict(size)

        # If segmentation map is passed we expect 2D images
        images = make_list_of_images(
            images, expected_ndims=2 if do_convert_rgb else 3)

        if not valid_images(images):
            raise ValueError(
                "Invalid image type. Must be of type PIL.Image.Image, numpy.ndarray, "
                "torch.Tensor, tf.Tensor or jax.ndarray."
            )

        if do_resize and size is None:
            raise ValueError("Size must be specified if do_resize is True.")

        if do_rescale and rescale_factor is None:
            raise ValueError(
                "Rescale factor must be specified if do_rescale is True.")

        if do_normalize and (image_mean is None or image_std is None):
            raise ValueError(
                "Image mean and std must be specified if do_normalize is True.")

        # All transformations expect numpy arrays.
        images = [to_numpy_array(image) for image in images]

        if is_scaled_image(images[0]) and do_rescale:
            logger.warning_once(
                "It looks like you are trying to rescale already rescaled images. If the input"
                " images have pixel values between 0 and 1, set `do_rescale=False` to avoid rescaling them again."
            )

        if input_data_format is None and not do_convert_rgb:
            # We assume that all images have the same channel dimension format.
            input_data_format = infer_channel_dimension_format(images[0])

        if do_convert_rgb:
            palette = self.get_palette(
                num_labels) if num_labels is not None else None
            # Since this is the input for the next transformations its format should be the same as the input_data_format
            images = [
                self.mask_to_rgb(image=image, palette=palette, data_format=ChannelDimension.FIRST) for image in images
            ]
            input_data_format = ChannelDimension.FIRST

        if do_resize:
            images = [
                self.resize(image=image, size=size_dict,
                            resample=resample, input_data_format=input_data_format)
                for image in images
            ]

        if do_rescale:
            images = [
                self.rescale(image=image, scale=rescale_factor,
                             input_data_format=input_data_format)
                for image in images
            ]

        if do_normalize:
            images = [
                self.normalize(image=image, mean=image_mean,
                               std=image_std, input_data_format=input_data_format)
                for image in images
            ]

        images = [
            to_channel_dimension_format(image, data_format, input_channel_dim=input_data_format) for image in images
        ]

        return images

    def preprocess(
        self,
        images: Optional[ImageInput] = None,
        prompt_images: Optional[ImageInput] = None,
        prompt_masks: Optional[ImageInput] = None,
        do_resize: Optional[bool] = None,
        size: Dict[str, int] = None,
        resample: PILImageResampling = None,
        do_rescale: Optional[bool] = None,
        rescale_factor: Optional[float] = None,
        do_normalize: Optional[bool] = None,
        image_mean: Optional[Union[float, List[float]]] = None,
        image_std: Optional[Union[float, List[float]]] = None,
        do_convert_rgb: Optional[bool] = None,
        num_labels: Optional[int] = None,
        return_tensors: Optional[Union[str, TensorType]] = None,
        data_format: Union[str, ChannelDimension] = ChannelDimension.FIRST,
        input_data_format: Optional[Union[str, ChannelDimension]] = None,
        **kwargs,
    ):
        """
        Preprocess an image or batch of images.

        Args:
            images (`ImageInput`):
                Image to _preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If
                passing in images with pixel values between 0 and 1, set `do_rescale=False`.
            prompt_images (`ImageInput`):
                Prompt image to _preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If
                passing in images with pixel values between 0 and 1, set `do_rescale=False`.
            prompt_masks (`ImageInput`):
                Prompt mask from prompt image to _preprocess that specifies the prompt_masks value in the preprocessed output.
                Can either be in the format of segmentation maps (no channels) or RGB images.

                - If in the format of RGB images, `do_convert_rgb` should be set to `False`.
                - If in the format of segmentation maps, specifying `num_labels` is recommended to build a
                palette to map the prompt mask from a single channel to a 3 channel RGB.
                - If `num_labels` is not specified, the prompt mask will be duplicated across the channel dimension.
            do_resize (`bool`, *optional*, defaults to `self.do_resize`):
                Whether to resize the image.
            size (`Dict[str, int]`, *optional*, defaults to `self.size`):
                Dictionary in the format `{"height": h, "width": w}` specifying the size of the output image after
                resizing.
            resample (`PILImageResampling` filter, *optional*, defaults to `self.resample`):
                `PILImageResampling` filter to use if resizing the image e.g. `PILImageResampling.BICUBIC`. Only has
                an effect if `do_resize` is set to `True`. Doesn't apply to prompt mask as it is resized using nearest.
            do_rescale (`bool`, *optional*, defaults to `self.do_rescale`):
                Whether to rescale the image values between [0 - 1].
            rescale_factor (`float`, *optional*, defaults to `self.rescale_factor`):
                Rescale factor to rescale the image by if `do_rescale` is set to `True`.
            do_normalize (`bool`, *optional*, defaults to `self.do_normalize`):
                Whether to normalize the image.
            image_mean (`float` or `List[float]`, *optional*, defaults to `self.image_mean`):
                Image mean to use if `do_normalize` is set to `True`.
            image_std (`float` or `List[float]`, *optional*, defaults to `self.image_std`):
                Image standard deviation to use if `do_normalize` is set to `True`.
            do_convert_rgb (`bool`, *optional*, defaults to `self.do_convert_rgb`):
                Whether to convert the prompt mask to RGB format. If `num_labels` is specified, a palette will be built
                to map the prompt mask from a single channel to a 3 channel RGB. If unset, the prompt mask is duplicated
                across the channel dimension. Must be set to `False` if the prompt mask is already in RGB format.
            num_labels: (`int`, *optional*):
                Number of classes in the segmentation task (excluding the background). If specified, a palette will be
                built, assuming that class_idx 0 is the background, to map the prompt mask from a plain segmentation map
                with no channels to a 3 channel RGB. Not specifying this will result in the prompt mask either being passed
                through as is if it is already in RGB format (if `do_convert_rgb` is false) or being duplicated
                across the channel dimension.
            return_tensors (`str` or `TensorType`, *optional*):
                The type of tensors to return. Can be one of:

                - Unset: Return a list of `np.ndarray`.
                - `TensorType.TENSORFLOW` or `'tf'`: Return a batch of type `tf.Tensor`.
                - `TensorType.PYTORCH` or `'pt'`: Return a batch of type `torch.Tensor`.
                - `TensorType.NUMPY` or `'np'`: Return a batch of type `np.ndarray`.
                - `TensorType.JAX` or `'jax'`: Return a batch of type `jax.numpy.ndarray`.
            data_format (`ChannelDimension` or `str`, *optional*, defaults to `ChannelDimension.FIRST`):
                The channel dimension format for the output image. Can be one of:

                - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format.
                - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format.
                - Unset: Use the channel dimension format of the input image.
            input_data_format (`ChannelDimension` or `str`, *optional*):
                The channel dimension format for the input image. If unset, the channel dimension format is inferred
                from the input image. Can be one of:

                - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format.
                - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format.
                - `"none"` or `ChannelDimension.NONE`: image in (height, width) format.
        """
        if all(v is None for v in [images, prompt_images, prompt_masks]):
            raise ValueError(
                "At least one of images, prompt_images, prompt_masks must be specified.")

        data = {}

        if images is not None:
            images = self._preprocess_step(
                images,
                is_mask=False,
                do_resize=do_resize,
                size=size,
                resample=resample,
                do_rescale=do_rescale,
                rescale_factor=rescale_factor,
                do_normalize=do_normalize,
                image_mean=image_mean,
                image_std=image_std,
                do_convert_rgb=False,
                data_format=data_format,
                input_data_format=input_data_format,
                **kwargs,
            )

            data["pixel_values"] = images

        if prompt_images is not None:
            prompt_images = self._preprocess_step(
                prompt_images,
                is_mask=False,
                do_resize=do_resize,
                size=size,
                resample=resample,
                do_rescale=do_rescale,
                rescale_factor=rescale_factor,
                do_normalize=do_normalize,
                image_mean=image_mean,
                image_std=image_std,
                do_convert_rgb=False,
                data_format=data_format,
                input_data_format=input_data_format,
                **kwargs,
            )

            data["prompt_pixel_values"] = prompt_images

        if prompt_masks is not None:
            prompt_masks = self._preprocess_step(
                prompt_masks,
                do_resize=do_resize,
                size=size,
                resample=PILImageResampling.NEAREST,
                do_rescale=do_rescale,
                rescale_factor=rescale_factor,
                do_normalize=do_normalize,
                image_mean=image_mean,
                image_std=image_std,
                do_convert_rgb=do_convert_rgb,
                num_labels=num_labels,
                data_format=data_format,
                input_data_format=input_data_format,
                **kwargs,
            )

            data["prompt_masks"] = prompt_masks

        return BatchFeature(data=data, tensor_type=return_tensors)

    def post_process_semantic_segmentation(
        self, outputs, target_sizes: Optional[List[Tuple[int, int]]] = None, num_labels: Optional[int] = None
    ):
        """
        Converts the output of [`SegGptImageSegmentationOutput`] into segmentation maps. Only supports
        MindSpore.

        Args:
            outputs ([`SegGptImageSegmentationOutput`]):
                Raw outputs of the model.
            target_sizes (`List[Tuple[int, int]]`, *optional*):
                List of length (batch_size), where each list item (`Tuple[int, int]`) corresponds to the requested
                final size (height, width) of each prediction. If left to None, predictions will not be resized.
            num_labels (`int`, *optional*):
                Number of classes in the segmentation task (excluding the background). If specified, a palette will be
                built, assuming that class_idx 0 is the background, to map prediction masks from RGB values to class
                indices. This value should be the same used when preprocessing inputs.
        Returns:
            semantic_segmentation: `List[ms.Tensor]` of length `batch_size`, where each item is a semantic
                segmentation map of shape (height, width) corresponding to the target_sizes entry (if `target_sizes` is
                specified). Each entry of each `ms.Tensor` corresponds to a semantic class id.
        """
        requires_backends(self, ["mindspore"])
        # batch_size x num_channels x 2*height x width
        masks = outputs.pred_masks

        # Predicted mask and prompt are concatenated in the height dimension
        # batch_size x num_channels x height x width
        masks = masks[:, :, masks.shape[2] // 2:, :]

        # To unnormalize we need to permute to channel last
        # batch_size x height x width x num_channels
        std = ms.Tensor(self.image_std)
        mean = ms.Tensor(self.image_mean)

        masks = masks.permute(0, 2, 3, 1) * std + mean

        # batch_size x num_channels x height x width
        masks = masks.permute(0, 3, 1, 2)

        # Clip to match with palette if specified
        masks = ops.clip(masks * 255, 0, 255)

        semantic_segmentation = []
        palette_tensor = None
        palette = self.get_palette(
            num_labels) if num_labels is not None else None
        if palette is not None:
            palette_tensor = ms.Tensor(palette).float()
            _, num_channels, _, _ = masks.shape
            palette_tensor = palette_tensor.view(
                1, 1, num_labels + 1, num_channels)

        for idx, mask in enumerate(masks):
            if target_sizes is not None:
                mask = ops.interpolate(
                    mask.unsqueeze(0),
                    size=target_sizes[idx],
                    mode="nearest",
                )[0]

            if num_labels is not None:
                channels, height, width = mask.shape
                dist = mask.permute(1, 2, 0).view(height, width, 1, channels)
                dist = dist - palette_tensor
                dist = ops.pow(dist, 2)
                dist = ops.sum(dist, dim=-1)
                pred = dist.argmin(axis=-1)

            else:
                # If no palette is specified SegGpt will try to paint using the mask class idx as RGB
                pred = mask.mean(axis=0).int()

            semantic_segmentation.append(pred)

        return semantic_segmentation

mindnlp.transformers.models.seggpt.image_processing_seggpt.SegGptImageProcessor.get_palette(num_labels)

Build a palette to map the prompt mask from a single channel to a 3 channel RGB.

PARAMETER DESCRIPTION
num_labels

Number of classes in the segmentation task (excluding the background).

TYPE: `int`

RETURNS DESCRIPTION
List[Tuple[int, int]]

List[Tuple[int, int]]: Palette to map the prompt mask from a single channel to a 3 channel RGB.

Source code in mindnlp/transformers/models/seggpt/image_processing_seggpt.py
def get_palette(self, num_labels: int) -> List[Tuple[int, int]]:
    """Build a palette to map the prompt mask from a single channel to a 3 channel RGB.

    Args:
        num_labels (`int`):
            Number of classes in the segmentation task (excluding the background).

    Returns:
        `List[Tuple[int, int]]`: Palette to map the prompt mask from a single channel to a 3 channel RGB.
    """
    return build_palette(num_labels)
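
A quick usage sketch (the exact colors come from the internal `build_palette` helper, so the background color below is stated as an assumption):

```python
from mindnlp.transformers.models.seggpt.image_processing_seggpt import SegGptImageProcessor

processor = SegGptImageProcessor()
palette = processor.get_palette(num_labels=3)

# One RGB triple per class index, with index 0 reserved for the background.
print(len(palette))  # 4 (background + 3 classes)
print(palette[0])    # expected to be black for the background, per the convention assumed here
```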

mindnlp.transformers.models.seggpt.image_processing_seggpt.SegGptImageProcessor.mask_to_rgb(image, palette=None, data_format=None)

Converts a segmentation map to RGB format.

PARAMETER DESCRIPTION
image

Segmentation map with dimensions (height, width) where pixel values represent the class index.

TYPE: `np.ndarray`

palette

Palette to use to convert the mask to RGB format. If unset, the mask is duplicated across the channel dimension.

TYPE: `List[Tuple[int, int]]`, *optional*, defaults to `None` DEFAULT: None

data_format

The channel dimension format for the output image. If unset, the channel dimension format of the input image is used. Can be one of:

  • "channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format.
  • "channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format.

TYPE: `ChannelDimension` or `str`, *optional* DEFAULT: None

RETURNS DESCRIPTION
ndarray

np.ndarray: The mask in RGB format.

Source code in mindnlp/transformers/models/seggpt/image_processing_seggpt.py
def mask_to_rgb(
    self,
    image: np.ndarray,
    palette: Optional[List[Tuple[int, int]]] = None,
    data_format: Optional[Union[str, ChannelDimension]] = None,
) -> np.ndarray:
    """Converts a segmentation map to RGB format.

    Args:
        image (`np.ndarray`):
            Segmentation map with dimensions (height, width) where pixel values represent the class index.
        palette (`List[Tuple[int, int]]`, *optional*, defaults to `None`):
            Palette to use to convert the mask to RGB format. If unset, the mask is duplicated across the channel
            dimension.
        data_format (`ChannelDimension` or `str`, *optional*):
            The channel dimension format for the output image. If unset, the channel dimension format of the input
            image is used. Can be one of:

            - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format.
            - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format.

    Returns:
        `np.ndarray`: The mask in RGB format.
    """
    return mask_to_rgb(image, palette=palette, data_format=data_format)
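
A small sketch with toy values (the 2x2 map below is purely illustrative): a single-channel class-index map becomes a 3-channel array, either via a palette or by duplicating the channel.

```python
import numpy as np

from mindnlp.transformers.models.seggpt.image_processing_seggpt import SegGptImageProcessor

processor = SegGptImageProcessor()
seg_map = np.array([[0, 1], [2, 1]], dtype=np.uint8)  # toy (height, width) map of class indices

# With a palette, every class index is replaced by its RGB color.
palette = processor.get_palette(num_labels=2)
rgb = processor.mask_to_rgb(seg_map, palette=palette, data_format="channels_first")
print(rgb.shape)  # (3, 2, 2) expected with channels_first output

# Without a palette, the single channel is simply duplicated across three channels.
rgb_duplicated = processor.mask_to_rgb(seg_map)
```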

mindnlp.transformers.models.seggpt.image_processing_seggpt.SegGptImageProcessor.post_process_semantic_segmentation(outputs, target_sizes=None, num_labels=None)

Converts the output of [SegGptImageSegmentationOutput] into segmentation maps. Only supports MindSpore.

PARAMETER DESCRIPTION
outputs

Raw outputs of the model.

TYPE: [`SegGptImageSegmentationOutput`]

target_sizes

List of length (batch_size), where each list item (Tuple[int, int]) corresponds to the requested final size (height, width) of each prediction. If left to None, predictions will not be resized.

TYPE: `List[Tuple[int, int]]`, *optional* DEFAULT: None

num_labels

Number of classes in the segmentation task (excluding the background). If specified, a palette will be built, assuming that class_idx 0 is the background, to map prediction masks from RGB values to class indices. This value should be the same used when preprocessing inputs.

TYPE: `int`, *optional* DEFAULT: None

Source code in mindnlp/transformers/models/seggpt/image_processing_seggpt.py
def post_process_semantic_segmentation(
    self, outputs, target_sizes: Optional[List[Tuple[int, int]]] = None, num_labels: Optional[int] = None
):
    """
    Converts the output of [`SegGptImageSegmentationOutput`] into segmentation maps. Only supports
    MindSpore.

    Args:
        outputs ([`SegGptImageSegmentationOutput`]):
            Raw outputs of the model.
        target_sizes (`List[Tuple[int, int]]`, *optional*):
            List of length (batch_size), where each list item (`Tuple[int, int]`) corresponds to the requested
            final size (height, width) of each prediction. If left to None, predictions will not be resized.
        num_labels (`int`, *optional*):
            Number of classes in the segmentation task (excluding the background). If specified, a palette will be
            built, assuming that class_idx 0 is the background, to map prediction masks from RGB values to class
            indices. This value should be the same used when preprocessing inputs.
    Returns:
        semantic_segmentation: `List[ms.Tensor]` of length `batch_size`, where each item is a semantic
            segmentation map of shape (height, width) corresponding to the target_sizes entry (if `target_sizes` is
            specified). Each entry of each `ms.Tensor` corresponds to a semantic class id.
    """
    requires_backends(self, ["mindspore"])
    # batch_size x num_channels x 2*height x width
    masks = outputs.pred_masks

    # Predicted mask and prompt are concatenated in the height dimension
    # batch_size x num_channels x height x width
    masks = masks[:, :, masks.shape[2] // 2:, :]

    # To unnormalize we need to permute to channel last
    # batch_size x height x width x num_channels
    std = ms.Tensor(self.image_std)
    mean = ms.Tensor(self.image_mean)

    masks = masks.permute(0, 2, 3, 1) * std + mean

    # batch_size x num_channels x height x width
    masks = masks.permute(0, 3, 1, 2)

    # Clip to match with palette if specified
    masks = ops.clip(masks * 255, 0, 255)

    semantic_segmentation = []
    palette_tensor = None
    palette = self.get_palette(
        num_labels) if num_labels is not None else None
    if palette is not None:
        palette_tensor = ms.Tensor(palette).float()
        _, num_channels, _, _ = masks.shape
        palette_tensor = palette_tensor.view(
            1, 1, num_labels + 1, num_channels)

    for idx, mask in enumerate(masks):
        if target_sizes is not None:
            mask = ops.interpolate(
                mask.unsqueeze(0),
                size=target_sizes[idx],
                mode="nearest",
            )[0]

        if num_labels is not None:
            channels, height, width = mask.shape
            dist = mask.permute(1, 2, 0).view(height, width, 1, channels)
            dist = dist - palette_tensor
            dist = ops.pow(dist, 2)
            dist = ops.sum(dist, dim=-1)
            pred = dist.argmin(axis=-1)

        else:
            # If no palette is specified SegGpt will try to paint using the mask class idx as RGB
            pred = mask.mean(axis=0).int()

        semantic_segmentation.append(pred)

    return semantic_segmentation
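
A self-contained sketch of the post-processing step; `pred_masks` is faked with random values here (in practice it comes from the segmentation model's output), purely to show the expected shapes:

```python
from types import SimpleNamespace

import numpy as np
import mindspore as ms

from mindnlp.transformers.models.seggpt.image_processing_seggpt import SegGptImageProcessor

processor = SegGptImageProcessor()

# Fake model output: batch of 1, 3 channels, prompt and prediction stacked along the height (2 * 448).
pred_masks = ms.Tensor(np.random.rand(1, 3, 2 * 448, 448).astype(np.float32))
outputs = SimpleNamespace(pred_masks=pred_masks)  # stands in for the real SegGptImageSegmentationOutput

# num_labels should match the value used during preprocessing so the same palette is rebuilt.
maps = processor.post_process_semantic_segmentation(outputs, num_labels=2)
print(maps[0].shape)  # (448, 448); each entry is a class index, with 0 as the background
```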

mindnlp.transformers.models.seggpt.image_processing_seggpt.SegGptImageProcessor.preprocess(images=None, prompt_images=None, prompt_masks=None, do_resize=None, size=None, resample=None, do_rescale=None, rescale_factor=None, do_normalize=None, image_mean=None, image_std=None, do_convert_rgb=None, num_labels=None, return_tensors=None, data_format=ChannelDimension.FIRST, input_data_format=None, **kwargs)

Preprocess an image or batch of images.

PARAMETER DESCRIPTION
images

Image to _preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and 1, set do_rescale=False.

TYPE: `ImageInput` DEFAULT: None

prompt_images

Prompt image to _preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and 1, set do_rescale=False.

TYPE: `ImageInput` DEFAULT: None

prompt_masks

Prompt mask from prompt image to _preprocess that specifies the prompt_masks value in the preprocessed output. Can either be in the format of segmentation maps (no channels) or RGB images.

  • If in the format of RGB images, do_convert_rgb should be set to False.
  • If in the format of segmentation maps, specifying num_labels is recommended to build a palette to map the prompt mask from a single channel to a 3 channel RGB.
  • If num_labels is not specified, the prompt mask will be duplicated across the channel dimension.

TYPE: `ImageInput` DEFAULT: None

do_resize

Whether to resize the image.

TYPE: `bool`, *optional*, defaults to `self.do_resize` DEFAULT: None

size

Dictionary in the format {"height": h, "width": w} specifying the size of the output image after resizing.

TYPE: `Dict[str, int]`, *optional*, defaults to `self.size` DEFAULT: None

resample

PILImageResampling filter to use if resizing the image e.g. PILImageResampling.BICUBIC. Only has an effect if do_resize is set to True. Doesn't apply to prompt mask as it is resized using nearest.

TYPE: `PILImageResampling` filter, *optional*, defaults to `self.resample` DEFAULT: None

do_rescale

Whether to rescale the image values between [0 - 1].

TYPE: `bool`, *optional*, defaults to `self.do_rescale` DEFAULT: None

rescale_factor

Rescale factor to rescale the image by if do_rescale is set to True.

TYPE: `float`, *optional*, defaults to `self.rescale_factor` DEFAULT: None

do_normalize

Whether to normalize the image.

TYPE: `bool`, *optional*, defaults to `self.do_normalize` DEFAULT: None

image_mean

Image mean to use if do_normalize is set to True.

TYPE: `float` or `List[float]`, *optional*, defaults to `self.image_mean` DEFAULT: None

image_std

Image standard deviation to use if do_normalize is set to True.

TYPE: `float` or `List[float]`, *optional*, defaults to `self.image_std` DEFAULT: None

do_convert_rgb

Whether to convert the prompt mask to RGB format. If num_labels is specified, a palette will be built to map the prompt mask from a single channel to a 3 channel RGB. If unset, the prompt mask is duplicated across the channel dimension. Must be set to False if the prompt mask is already in RGB format.

TYPE: `bool`, *optional*, defaults to `self.do_convert_rgb` DEFAULT: None

num_labels

Number of classes in the segmentation task (excluding the background). If specified, a palette will be built, assuming that class_idx 0 is the background, to map the prompt mask from a plain segmentation map with no channels to a 3 channel RGB. Not specifying this will result in the prompt mask either being passed through as is if it is already in RGB format (if do_convert_rgb is false) or being duplicated across the channel dimension.

TYPE: Optional[int] DEFAULT: None

return_tensors

The type of tensors to return. Can be one of:

  • Unset: Return a list of np.ndarray.
  • TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor.
  • TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor.
  • TensorType.NUMPY or 'np': Return a batch of type np.ndarray.
  • TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray.

TYPE: `str` or `TensorType`, *optional* DEFAULT: None

data_format

The channel dimension format for the output image. Can be one of:

  • "channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format.
  • "channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format.
  • Unset: Use the channel dimension format of the input image.

TYPE: `ChannelDimension` or `str`, *optional*, defaults to `ChannelDimension.FIRST` DEFAULT: FIRST

input_data_format

The channel dimension format for the input image. If unset, the channel dimension format is inferred from the input image. Can be one of:

  • "channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format.
  • "channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format.
  • "none" or ChannelDimension.NONE: image in (height, width) format.

TYPE: `ChannelDimension` or `str`, *optional* DEFAULT: None
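
Tying the arguments above together, here is a minimal end-to-end preprocessing sketch (random arrays stand in for real images, and the shapes are only illustrative):

```python
import numpy as np

from mindnlp.transformers.models.seggpt.image_processing_seggpt import SegGptImageProcessor

processor = SegGptImageProcessor()

image = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)         # input image
prompt_image = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)  # in-context example image
prompt_mask = np.random.randint(0, 3, (512, 512), dtype=np.uint8)        # its segmentation map (2 classes + background)

inputs = processor.preprocess(
    images=image,
    prompt_images=prompt_image,
    prompt_masks=prompt_mask,
    num_labels=2,  # build a palette so the single-channel mask is mapped to 3-channel RGB
)

print(list(inputs.keys()))              # ['pixel_values', 'prompt_pixel_values', 'prompt_masks']
print(inputs["pixel_values"][0].shape)  # (3, 448, 448) after resizing to the default size
```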

Source code in mindnlp/transformers/models/seggpt/image_processing_seggpt.py
def preprocess(
    self,
    images: Optional[ImageInput] = None,
    prompt_images: Optional[ImageInput] = None,
    prompt_masks: Optional[ImageInput] = None,
    do_resize: Optional[bool] = None,
    size: Dict[str, int] = None,
    resample: PILImageResampling = None,
    do_rescale: Optional[bool] = None,
    rescale_factor: Optional[float] = None,
    do_normalize: Optional[bool] = None,
    image_mean: Optional[Union[float, List[float]]] = None,
    image_std: Optional[Union[float, List[float]]] = None,
    do_convert_rgb: Optional[bool] = None,
    num_labels: Optional[int] = None,
    return_tensors: Optional[Union[str, TensorType]] = None,
    data_format: Union[str, ChannelDimension] = ChannelDimension.FIRST,
    input_data_format: Optional[Union[str, ChannelDimension]] = None,
    **kwargs,
):
    """
    Preprocess an image or batch of images.

    Args:
        images (`ImageInput`):
            Image to _preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If
            passing in images with pixel values between 0 and 1, set `do_rescale=False`.
        prompt_images (`ImageInput`):
            Prompt image to _preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If
            passing in images with pixel values between 0 and 1, set `do_rescale=False`.
        prompt_masks (`ImageInput`):
            Prompt mask from the prompt image to _preprocess, which specifies the `prompt_masks` value in the preprocessed output.
            Can either be in the format of segmentation maps (no channels) or RGB images.

            - If in the format of RGB images, `do_convert_rgb` should be set to `False`.
            - If in the format of segmentation maps, specifying `num_labels` is recommended to build a
            palette to map the prompt mask from a single channel to a 3 channel RGB.
            - If `num_labels` is not specified, the prompt mask will be duplicated across the channel dimension.
        do_resize (`bool`, *optional*, defaults to `self.do_resize`):
            Whether to resize the image.
        size (`Dict[str, int]`, *optional*, defaults to `self.size`):
            Dictionary in the format `{"height": h, "width": w}` specifying the size of the output image after
            resizing.
        resample (`PILImageResampling` filter, *optional*, defaults to `self.resample`):
            `PILImageResampling` filter to use if resizing the image e.g. `PILImageResampling.BICUBIC`. Only has
            an effect if `do_resize` is set to `True`. Doesn't apply to prompt mask as it is resized using nearest.
        do_rescale (`bool`, *optional*, defaults to `self.do_rescale`):
            Whether to rescale the image values between [0 - 1].
        rescale_factor (`float`, *optional*, defaults to `self.rescale_factor`):
            Rescale factor to rescale the image by if `do_rescale` is set to `True`.
        do_normalize (`bool`, *optional*, defaults to `self.do_normalize`):
            Whether to normalize the image.
        image_mean (`float` or `List[float]`, *optional*, defaults to `self.image_mean`):
            Image mean to use if `do_normalize` is set to `True`.
        image_std (`float` or `List[float]`, *optional*, defaults to `self.image_std`):
            Image standard deviation to use if `do_normalize` is set to `True`.
        do_convert_rgb (`bool`, *optional*, defaults to `self.do_convert_rgb`):
            Whether to convert the prompt mask to RGB format. If `num_labels` is specified, a palette will be built
            to map the prompt mask from a single channel to a 3 channel RGB. If unset, the prompt mask is duplicated
            across the channel dimension. Must be set to `False` if the prompt mask is already in RGB format.
        num_labels (`int`, *optional*):
            Number of classes in the segmentation task (excluding the background). If specified, a palette will be
            built, assuming that class_idx 0 is the background, to map the prompt mask from a plain segmentation map
            with no channels to a 3 channel RGB. Not specifying this will result in the prompt mask either being passed
            through as is if it is already in RGB format (if `do_convert_rgb` is false) or being duplicated
            across the channel dimension.
        return_tensors (`str` or `TensorType`, *optional*):
            The type of tensors to return. Can be one of:

            - Unset: Return a list of `np.ndarray`.
            - `TensorType.TENSORFLOW` or `'tf'`: Return a batch of type `tf.Tensor`.
            - `TensorType.PYTORCH` or `'pt'`: Return a batch of type `torch.Tensor`.
            - `TensorType.NUMPY` or `'np'`: Return a batch of type `np.ndarray`.
            - `TensorType.JAX` or `'jax'`: Return a batch of type `jax.numpy.ndarray`.
        data_format (`ChannelDimension` or `str`, *optional*, defaults to `ChannelDimension.FIRST`):
            The channel dimension format for the output image. Can be one of:

            - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format.
            - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format.
            - Unset: Use the channel dimension format of the input image.
        input_data_format (`ChannelDimension` or `str`, *optional*):
            The channel dimension format for the input image. If unset, the channel dimension format is inferred
            from the input image. Can be one of:

            - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format.
            - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format.
            - `"none"` or `ChannelDimension.NONE`: image in (height, width) format.
    """
    if all(v is None for v in [images, prompt_images, prompt_masks]):
        raise ValueError(
            "At least one of images, prompt_images, prompt_masks must be specified.")

    data = {}

    if images is not None:
        images = self._preprocess_step(
            images,
            is_mask=False,
            do_resize=do_resize,
            size=size,
            resample=resample,
            do_rescale=do_rescale,
            rescale_factor=rescale_factor,
            do_normalize=do_normalize,
            image_mean=image_mean,
            image_std=image_std,
            do_convert_rgb=False,
            data_format=data_format,
            input_data_format=input_data_format,
            **kwargs,
        )

        data["pixel_values"] = images

    if prompt_images is not None:
        prompt_images = self._preprocess_step(
            prompt_images,
            is_mask=False,
            do_resize=do_resize,
            size=size,
            resample=resample,
            do_rescale=do_rescale,
            rescale_factor=rescale_factor,
            do_normalize=do_normalize,
            image_mean=image_mean,
            image_std=image_std,
            do_convert_rgb=False,
            data_format=data_format,
            input_data_format=input_data_format,
            **kwargs,
        )

        data["prompt_pixel_values"] = prompt_images

    if prompt_masks is not None:
        prompt_masks = self._preprocess_step(
            prompt_masks,
            do_resize=do_resize,
            size=size,
            resample=PILImageResampling.NEAREST,
            do_rescale=do_rescale,
            rescale_factor=rescale_factor,
            do_normalize=do_normalize,
            image_mean=image_mean,
            image_std=image_std,
            do_convert_rgb=do_convert_rgb,
            num_labels=num_labels,
            data_format=data_format,
            input_data_format=input_data_format,
            **kwargs,
        )

        data["prompt_masks"] = prompt_masks

    return BatchFeature(data=data, tensor_type=return_tensors)
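
For orientation, here is a minimal usage sketch of `preprocess`. It assumes `SegGptImageProcessor` is importable from `mindnlp.transformers` with its default 448x448 working size, and uses random arrays as stand-ins for real images and masks (hypothetical data, not the checkpoint example).

```python
import numpy as np
from mindnlp.transformers import SegGptImageProcessor

processor = SegGptImageProcessor()  # default size assumed to be {"height": 448, "width": 448}

image = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)         # query image
prompt_image = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)  # in-context example image
prompt_mask = np.random.randint(0, 2, (512, 512), dtype=np.uint8)        # single-channel segmentation map

inputs = processor.preprocess(
    images=image,
    prompt_images=prompt_image,
    prompt_masks=prompt_mask,
    num_labels=1,          # build a palette so the map becomes a 3-channel RGB prompt mask
    return_tensors="np",
)
print(inputs["pixel_values"].shape)         # e.g. (1, 3, 448, 448)
print(inputs["prompt_pixel_values"].shape)  # e.g. (1, 3, 448, 448)
print(inputs["prompt_masks"].shape)         # e.g. (1, 3, 448, 448)
```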

mindnlp.transformers.models.seggpt.image_processing_seggpt.SegGptImageProcessor.resize(image, size, resample=PILImageResampling.BICUBIC, data_format=None, input_data_format=None, **kwargs)

Resize an image to (size["height"], size["width"]).

PARAMETER DESCRIPTION
image

Image to resize.

TYPE: `np.ndarray`

size

Dictionary in the format {"height": int, "width": int} specifying the size of the output image.

TYPE: `Dict[str, int]`

resample

PILImageResampling filter to use when resizing the image e.g. PILImageResampling.BICUBIC.

TYPE: `PILImageResampling`, *optional*, defaults to `PILImageResampling.BICUBIC` DEFAULT: BICUBIC

data_format

The channel dimension format for the output image. If unset, the channel dimension format of the input image is used. Can be one of:

  • "channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format.
  • "channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format.
  • "none" or ChannelDimension.NONE: image in (height, width) format.

TYPE: `ChannelDimension` or `str`, *optional* DEFAULT: None

input_data_format

The channel dimension format for the input image. If unset, the channel dimension format is inferred from the input image. Can be one of:

  • "channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format.
  • "channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format.
  • "none" or ChannelDimension.NONE: image in (height, width) format.

TYPE: `ChannelDimension` or `str`, *optional* DEFAULT: None

RETURNS DESCRIPTION
ndarray

np.ndarray: The resized image.

Source code in mindnlp/transformers/models/seggpt/image_processing_seggpt.py
def resize(
    self,
    image: np.ndarray,
    size: Dict[str, int],
    resample: PILImageResampling = PILImageResampling.BICUBIC,
    data_format: Optional[Union[str, ChannelDimension]] = None,
    input_data_format: Optional[Union[str, ChannelDimension]] = None,
    **kwargs,
) -> np.ndarray:
    """
    Resize an image to `(size["height"], size["width"])`.

    Args:
        image (`np.ndarray`):
            Image to resize.
        size (`Dict[str, int]`):
            Dictionary in the format `{"height": int, "width": int}` specifying the size of the output image.
        resample (`PILImageResampling`, *optional*, defaults to `PILImageResampling.BICUBIC`):
            `PILImageResampling` filter to use when resizing the image e.g. `PILImageResampling.BICUBIC`.
        data_format (`ChannelDimension` or `str`, *optional*):
            The channel dimension format for the output image. If unset, the channel dimension format of the input
            image is used. Can be one of:

            - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format.
            - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format.
            - `"none"` or `ChannelDimension.NONE`: image in (height, width) format.
        input_data_format (`ChannelDimension` or `str`, *optional*):
            The channel dimension format for the input image. If unset, the channel dimension format is inferred
            from the input image. Can be one of:

            - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format.
            - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format.
            - `"none"` or `ChannelDimension.NONE`: image in (height, width) format.

    Returns:
        `np.ndarray`: The resized image.
    """
    size = get_size_dict(size)
    if "height" not in size or "width" not in size:
        raise ValueError(
            f"The `size` dictionary must contain the keys `height` and `width`. Got {size.keys()}")
    output_size = (size["height"], size["width"])
    return resize(
        image,
        size=output_size,
        resample=resample,
        data_format=data_format,
        input_data_format=input_data_format,
        **kwargs,
    )
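
A quick, hypothetical standalone call to `resize` on random data; because `data_format` is left unset, the channel format of the input array is preserved.

```python
import numpy as np
from mindnlp.transformers import SegGptImageProcessor

processor = SegGptImageProcessor()
image = np.random.randint(0, 256, (600, 800, 3), dtype=np.uint8)  # (height, width, channels)
resized = processor.resize(image, size={"height": 448, "width": 448})
print(resized.shape)  # (448, 448, 3)
```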

mindnlp.transformers.models.seggpt.modeling_seggpt

MindSpore SegGpt model.

mindnlp.transformers.models.seggpt.modeling_seggpt.SegGptAttention

Bases: Module

Multi-head Attention block with relative position embeddings.

Source code in mindnlp/transformers/models/seggpt/modeling_seggpt.py
class SegGptAttention(nn.Module):
    """Multi-head Attention block with relative position embeddings."""

    def __init__(self, config):
        super().__init__()
        image_size, patch_size = config.image_size, config.patch_size
        image_size = image_size if isinstance(
            image_size, collections.abc.Iterable) else (image_size, image_size)
        patch_size = patch_size if isinstance(
            patch_size, collections.abc.Iterable) else (patch_size, patch_size)

        input_size = (image_size[0] // config.patch_size,
                      image_size[1] // config.patch_size)
        head_dim = config.hidden_size // config.num_attention_heads

        self.num_attention_heads = config.num_attention_heads
        self.scale = head_dim**-0.5

        self.qkv = nn.Linear(config.hidden_size,
                            config.hidden_size * 3, bias=config.qkv_bias)
        self.proj = nn.Linear(config.hidden_size, config.hidden_size)

        self.use_relative_position_embeddings = config.use_relative_position_embeddings
        if self.use_relative_position_embeddings:
            if input_size is None:
                raise ValueError(
                    "Input size must be provided if using relative positional encoding.")

            # initialize relative positional embeddings
            self.rel_pos_h = ms.Parameter(
                ops.zeros(2 * input_size[0] - 1, head_dim))
            self.rel_pos_w = ms.Parameter(
                ops.zeros(2 * input_size[1] - 1, head_dim))

    def get_rel_pos(self, q_size: int, k_size: int, rel_pos: ms.Tensor) -> ms.Tensor:
        """
        Get relative positional embeddings according to the relative positions of
            query and key sizes.

        Args:
            q_size (int):
                size of the query.
            k_size (int):
                size of key k.
            rel_pos (`ms.Tensor`):
                relative position embeddings (L, channel).

        Returns:
            Extracted positional embeddings according to relative positions.
        """
        max_rel_dist = int(2 * max(q_size, k_size) - 1)
        # Interpolate rel pos.
        rel_pos_resized = ops.interpolate(
            rel_pos.reshape(1, rel_pos.shape[0], -1).permute(0, 2, 1),
            size=max_rel_dist,
            mode="linear",
        )
        rel_pos_resized = rel_pos_resized.reshape(
            -1, max_rel_dist).permute(1, 0)

        # Scale the coords with short length if shapes for q and k are different.
        q_coords = ops.arange(q_size)[:, None] * max(k_size / q_size, 1.0)
        k_coords = ops.arange(k_size)[None, :] * max(q_size / k_size, 1.0)
        relative_coords = (q_coords - k_coords) + \
            (k_size - 1) * max(q_size / k_size, 1.0)

        return rel_pos_resized[relative_coords.long()]

    def add_decomposed_rel_pos(
        self,
        attn: ms.Tensor,
        query: ms.Tensor,
        rel_pos_h: ms.Tensor,
        rel_pos_w: ms.Tensor,
        q_size: Tuple[int, int],
        k_size: Tuple[int, int],
    ) -> ms.Tensor:
        """
        Calculate decomposed Relative Positional Embeddings from :paper:`mvitv2`.
        https://github.com/facebookresearch/mvit/blob/19786631e330df9f3622e5402b4a419a263a2c80/mvit/models/attention.py

        Args:
            attn (`ms.Tensor`):
                attention map.
            query (`ms.Tensor`):
                query q in the attention layer with shape (batch_size, query_height * query_width, channel).
            rel_pos_h (`ms.Tensor`):
                relative position embeddings (Lh, channel) for height axis.
            rel_pos_w (`ms.Tensor`):
                relative position embeddings (Lw, channel) for width axis.
            q_size (tuple):
                spatial sequence size of query q with (query_height, query_width).
            k_size (tuple):
                spatial sequence size of key k with (key_height, key_width).

        Returns:
            attn (`ms.Tensor`):
                attention map with added relative positional embeddings.
        """
        query_height, query_width = q_size
        key_height, key_width = k_size
        relative_position_height = self.get_rel_pos(
            query_height, key_height, rel_pos_h)
        relative_position_width = self.get_rel_pos(
            query_width, key_width, rel_pos_w)

        batch_size, _, dim = query.shape
        reshaped_query = query.reshape(
            batch_size, query_height, query_width, dim)
        rel_h = ops.einsum("bhwc,hkc->bhwk", reshaped_query,
                           relative_position_height)
        rel_w = ops.einsum("bhwc,wkc->bhwk", reshaped_query,
                           relative_position_width)
        attn = attn.reshape(batch_size, query_height,
                            query_width, key_height, key_width)
        attn = attn + rel_h[:, :, :, :, None] + rel_w[:, :, :, None, :]
        attn = attn.reshape(batch_size, query_height *
                            query_width, key_height * key_width)
        return attn

    def forward(self, hidden_states: ms.Tensor, output_attentions=False) -> ms.Tensor:
        batch_size, height, width, _ = hidden_states.shape
        # qkv with shape (3, batch_size, nHead, height * width, channel)
        qkv = (
            self.qkv(hidden_states)
            .reshape(batch_size, height * width, 3, self.num_attention_heads, -1)
            .permute(2, 0, 3, 1, 4)
        )
        # q, k, v with shape (batch_size * nHead, height * width, channel)
        query, key, value = qkv.reshape(
            3, batch_size * self.num_attention_heads, height * width, -1).unbind(0)

        attn_weights = (query * self.scale) @ key.swapaxes(-2, -1)

        if self.use_relative_position_embeddings:
            attn_weights = self.add_decomposed_rel_pos(
                attn_weights, query, self.rel_pos_h, self.rel_pos_w, (
                    height, width), (height, width)
            )

        attn_weights = ops.softmax(
            attn_weights, dtype=ms.float32, axis=-1).astype(query.dtype)

        if output_attentions:
            # this operation is a bit awkward, but it's required to
            # make sure that attn_weights keeps its gradient.
            # In order to do so, attn_weights have to reshaped
            # twice and have to be reused in the following
            attn_weights_reshaped = attn_weights.view(
                batch_size, self.num_attention_heads, height * width, -1)
            attn_weights = attn_weights_reshaped.view(
                batch_size * self.num_attention_heads, height * width, -1)
        else:
            attn_weights_reshaped = None

        attn_output = (attn_weights @ value).reshape(batch_size,
                                                     self.num_attention_heads, height, width, -1)
        attn_output = attn_output.permute(
            0, 2, 3, 1, 4).reshape(batch_size, height, width, -1)

        attn_output = self.proj(attn_output)

        return (attn_output, attn_weights_reshaped)
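
To make the tensor bookkeeping in `forward` easier to follow, here is a NumPy shape sketch of the same reshapes on a deliberately small toy grid. The values are random and the qkv projection is faked; only the shapes match the module.

```python
import numpy as np

batch_size, height, width, hidden_size, num_heads = 1, 8, 4, 64, 4
head_dim = hidden_size // num_heads

# stand-in for self.qkv(hidden_states) followed by the reshape in forward
qkv = np.random.randn(batch_size, height * width, 3, num_heads, head_dim)
qkv = qkv.transpose(2, 0, 3, 1, 4).reshape(3, batch_size * num_heads, height * width, head_dim)
query, key, value = qkv[0], qkv[1], qkv[2]

attn_weights = (query * head_dim ** -0.5) @ key.swapaxes(-2, -1)
print(attn_weights.shape)  # (4, 32, 32) -> (batch_size * num_heads, height * width, height * width)

attn_output = (attn_weights @ value).reshape(batch_size, num_heads, height, width, head_dim)
attn_output = attn_output.transpose(0, 2, 3, 1, 4).reshape(batch_size, height, width, hidden_size)
print(attn_output.shape)   # (1, 8, 4, 64)
```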

mindnlp.transformers.models.seggpt.modeling_seggpt.SegGptAttention.add_decomposed_rel_pos(attn, query, rel_pos_h, rel_pos_w, q_size, k_size)

Calculate decomposed relative positional embeddings from the MViTv2 paper. https://github.com/facebookresearch/mvit/blob/19786631e330df9f3622e5402b4a419a263a2c80/mvit/models/attention.py

PARAMETER DESCRIPTION
attn

attention map.

TYPE: `ms.Tensor`

query

query q in the attention layer with shape (batch_size, query_height * query_width, channel).

TYPE: `ms.Tensor`

rel_pos_h

relative position embeddings (Lh, channel) for height axis.

TYPE: `ms.Tensor`

rel_pos_w

relative position embeddings (Lw, channel) for width axis.

TYPE: `ms.Tensor`

q_size

spatial sequence size of query q with (query_height, query_width).

TYPE: tuple

k_size

spatial sequence size of key k with (key_height, key_width).

TYPE: tuple

RETURNS DESCRIPTION
attn

attention map with added relative positional embeddings.

TYPE: `ms.Tensor`

Source code in mindnlp/transformers/models/seggpt/modeling_seggpt.py
def add_decomposed_rel_pos(
    self,
    attn: ms.Tensor,
    query: ms.Tensor,
    rel_pos_h: ms.Tensor,
    rel_pos_w: ms.Tensor,
    q_size: Tuple[int, int],
    k_size: Tuple[int, int],
) -> ms.Tensor:
    """
    Calculate decomposed Relative Positional Embeddings from :paper:`mvitv2`.
    https://github.com/facebookresearch/mvit/blob/19786631e330df9f3622e5402b4a419a263a2c80/mvit/models/attention.py

    Args:
        attn (`ms.Tensor`):
            attention map.
        query (`ms.Tensor`):
            query q in the attention layer with shape (batch_size, query_height * query_width, channel).
        rel_pos_h (`ms.Tensor`):
            relative position embeddings (Lh, channel) for height axis.
        rel_pos_w (`ms.Tensor`):
            relative position embeddings (Lw, channel) for width axis.
        q_size (tuple):
            spatial sequence size of query q with (query_height, query_width).
        k_size (tuple):
            spatial sequence size of key k with (key_height, key_width).

    Returns:
        attn (`ms.Tensor`):
            attention map with added relative positional embeddings.
    """
    query_height, query_width = q_size
    key_height, key_width = k_size
    relative_position_height = self.get_rel_pos(
        query_height, key_height, rel_pos_h)
    relative_position_width = self.get_rel_pos(
        query_width, key_width, rel_pos_w)

    batch_size, _, dim = query.shape
    reshaped_query = query.reshape(
        batch_size, query_height, query_width, dim)
    rel_h = ops.einsum("bhwc,hkc->bhwk", reshaped_query,
                       relative_position_height)
    rel_w = ops.einsum("bhwc,wkc->bhwk", reshaped_query,
                       relative_position_width)
    attn = attn.reshape(batch_size, query_height,
                        query_width, key_height, key_width)
    attn = attn + rel_h[:, :, :, :, None] + rel_w[:, :, :, None, :]
    attn = attn.reshape(batch_size, query_height *
                        query_width, key_height * key_width)
    return attn
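
A NumPy sketch of the same computation, assuming the height and width relative-position tables have already been extracted with `get_rel_pos` (random values, toy sizes):

```python
import numpy as np

batch, q_h, q_w, k_h, k_w, channels = 2, 4, 4, 4, 4, 8
query = np.random.randn(batch, q_h * q_w, channels)
attn = np.random.randn(batch, q_h * q_w, k_h * k_w)
rel_h_table = np.random.randn(q_h, k_h, channels)  # output of get_rel_pos for the height axis
rel_w_table = np.random.randn(q_w, k_w, channels)  # output of get_rel_pos for the width axis

reshaped_query = query.reshape(batch, q_h, q_w, channels)
rel_h = np.einsum("bhwc,hkc->bhwk", reshaped_query, rel_h_table)  # (batch, q_h, q_w, k_h)
rel_w = np.einsum("bhwc,wkc->bhwk", reshaped_query, rel_w_table)  # (batch, q_h, q_w, k_w)

attn = attn.reshape(batch, q_h, q_w, k_h, k_w)
attn = attn + rel_h[:, :, :, :, None] + rel_w[:, :, :, None, :]
attn = attn.reshape(batch, q_h * q_w, k_h * k_w)
print(attn.shape)  # (2, 16, 16)
```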

mindnlp.transformers.models.seggpt.modeling_seggpt.SegGptAttention.get_rel_pos(q_size, k_size, rel_pos)

Get relative positional embeddings according to the relative positions of query and key sizes.

PARAMETER DESCRIPTION
q_size

size of the query.

TYPE: int

k_size

size of key k.

TYPE: int

rel_pos

relative position embeddings (L, channel).

TYPE: `ms.Tensor`

RETURNS DESCRIPTION
Tensor

Extracted positional embeddings according to relative positions.

Source code in mindnlp/transformers/models/seggpt/modeling_seggpt.py
def get_rel_pos(self, q_size: int, k_size: int, rel_pos: ms.Tensor) -> ms.Tensor:
    """
    Get relative positional embeddings according to the relative positions of
        query and key sizes.

    Args:
        q_size (int):
            size of the query.
        k_size (int):
            size of key k.
        rel_pos (`ms.Tensor`):
            relative position embeddings (L, channel).

    Returns:
        Extracted positional embeddings according to relative positions.
    """
    max_rel_dist = int(2 * max(q_size, k_size) - 1)
    # Interpolate rel pos.
    rel_pos_resized = ops.interpolate(
        rel_pos.reshape(1, rel_pos.shape[0], -1).permute(0, 2, 1),
        size=max_rel_dist,
        mode="linear",
    )
    rel_pos_resized = rel_pos_resized.reshape(
        -1, max_rel_dist).permute(1, 0)

    # Scale the coords with short length if shapes for q and k are different.
    q_coords = ops.arange(q_size)[:, None] * max(k_size / q_size, 1.0)
    k_coords = ops.arange(k_size)[None, :] * max(q_size / k_size, 1.0)
    relative_coords = (q_coords - k_coords) + \
        (k_size - 1) * max(q_size / k_size, 1.0)

    return rel_pos_resized[relative_coords.long()]
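
The coordinate arithmetic is easiest to see in isolation. The sketch below reproduces it in NumPy, skipping the interpolation step by assuming the table already has `2 * max(q_size, k_size) - 1` rows (random values):

```python
import numpy as np

def get_rel_pos_np(q_size: int, k_size: int, rel_pos: np.ndarray) -> np.ndarray:
    # Scale the coordinates with the shorter length if q and k sizes differ.
    q_coords = np.arange(q_size)[:, None] * max(k_size / q_size, 1.0)
    k_coords = np.arange(k_size)[None, :] * max(q_size / k_size, 1.0)
    relative_coords = (q_coords - k_coords) + (k_size - 1) * max(q_size / k_size, 1.0)
    return rel_pos[relative_coords.astype(int)]  # (q_size, k_size, channels)

head_dim, q_size, k_size = 8, 4, 4
rel_pos = np.random.randn(2 * max(q_size, k_size) - 1, head_dim)
print(get_rel_pos_np(q_size, k_size, rel_pos).shape)  # (4, 4, 8)
```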

mindnlp.transformers.models.seggpt.modeling_seggpt.SegGptDropPath

Bases: Module

Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).

Source code in mindnlp/transformers/models/seggpt/modeling_seggpt.py
class SegGptDropPath(nn.Module):
    """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks)."""

    def __init__(self, drop_prob: Optional[float] = None) -> None:
        super().__init__()
        self.drop_prob = drop_prob

    def forward(self, hidden_states: ms.Tensor) -> ms.Tensor:
        return drop_path(hidden_states, self.drop_prob, self.training)

    def extra_repr(self) -> str:
        return "p={}".format(self.drop_prob)
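
Stochastic depth in this form drops entire samples from the residual branch. Below is a minimal NumPy sketch of the behaviour, illustrative only and not the mindnlp `drop_path` implementation:

```python
import numpy as np

def drop_path_np(x: np.ndarray, drop_prob: float = 0.0, training: bool = True) -> np.ndarray:
    if drop_prob == 0.0 or not training:
        return x
    keep_prob = 1.0 - drop_prob
    # one Bernoulli draw per sample, broadcast over the remaining dimensions
    shape = (x.shape[0],) + (1,) * (x.ndim - 1)
    mask = (np.random.rand(*shape) < keep_prob).astype(x.dtype)
    return x / keep_prob * mask  # rescale kept samples so the expected value is unchanged

batch = np.ones((4, 3, 2))
print(drop_path_np(batch, drop_prob=0.5).sum(axis=(1, 2)))  # roughly half the samples are zeroed
```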

mindnlp.transformers.models.seggpt.modeling_seggpt.SegGptEmbeddings

Bases: Module

Construct the embeddings from patch, position embeddings for input and prompt.

Source code in mindnlp/transformers/models/seggpt/modeling_seggpt.py
class SegGptEmbeddings(nn.Module):
    """
    Construct the embeddings from patch, position embeddings for input and prompt.
    """

    def __init__(self, config: SegGptConfig) -> None:
        super().__init__()

        self.mask_token = ms.Parameter(ops.zeros(1, 1, 1, config.hidden_size))
        self.segment_token_input = ms.Parameter(
            ops.zeros(1, 1, 1, config.hidden_size))
        self.segment_token_prompt = ms.Parameter(
            ops.zeros(1, 1, 1, config.hidden_size))
        # token for seg types
        self.type_token_semantic = ms.Parameter(
            ops.zeros(1, 1, 1, config.hidden_size))
        self.type_token_instance = ms.Parameter(
            ops.zeros(1, 1, 1, config.hidden_size))

        self.patch_embeddings = SegGptPatchEmbeddings(config)

        num_positions = (config.pretrain_image_size //
                         config.patch_size) ** 2 + 1
        self.position_embeddings = ms.Parameter(
            ops.randn(1, num_positions, config.hidden_size))
        self.dropout = nn.Dropout(config.hidden_dropout_prob)

    def interpolate_pos_encoding(self, height: int, width: int) -> ms.Tensor:
        patch_pos_embed = self.position_embeddings[:, 1:]
        num_patches = patch_pos_embed.shape[1]
        pretrain_patch_size = int(math.sqrt(num_patches))

        if pretrain_patch_size != height or pretrain_patch_size != width:
            patch_pos_embed = ops.interpolate(
                patch_pos_embed.reshape(
                    1, pretrain_patch_size, pretrain_patch_size, -1).permute(0, 3, 1, 2),
                size=(height, width),
                mode="bicubic",
                align_corners=False,
            )

            return patch_pos_embed.permute(0, 2, 3, 1)
        else:
            return patch_pos_embed.reshape(1, height, width, -1)

    def forward(
        self,
        pixel_values: ms.Tensor,
        prompt_pixel_values: ms.Tensor,
        bool_masked_pos: Optional[ms.Tensor] = None,
        embedding_type: Optional[str] = None,
    ) -> ms.Tensor:
        input_embeddings = self.patch_embeddings(pixel_values)
        prompt_embeddings = self.patch_embeddings(prompt_pixel_values)

        batch_size, patch_height, patch_width, _ = input_embeddings.shape

        mask_token = self.mask_token.expand(
            batch_size, patch_height, patch_width, -1)
        # replace the masked visual tokens by mask_token
        w = bool_masked_pos.unsqueeze(-1).type_as(
            mask_token).reshape(-1, patch_height, patch_width, 1)
        prompt_embeddings = prompt_embeddings * (1 - w) + mask_token * w

        embedding_type = embedding_type if embedding_type is not None else "instance"

        # add positional encoding to each token
        pos_embed = self.interpolate_pos_encoding(patch_height, patch_width)

        # add segment token
        input_embeddings = input_embeddings + self.segment_token_input
        prompt_embeddings = prompt_embeddings + self.segment_token_prompt

        # add position embedding skipping CLS
        input_embeddings = input_embeddings + pos_embed
        prompt_embeddings = prompt_embeddings + pos_embed

        # add type embedding to each token
        if embedding_type == "semantic":
            type_embedding = self.type_token_semantic
        elif embedding_type == "instance":
            type_embedding = self.type_token_instance
        else:
            raise ValueError(
                f"Embedding type should be either 'semantic' or 'instance', but got {embedding_type}")

        input_embeddings = input_embeddings + type_embedding
        prompt_embeddings = prompt_embeddings + type_embedding

        embeddings = ops.cat((input_embeddings, prompt_embeddings), axis=0)

        return embeddings
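
With the default configuration (image size `[896, 448]`, patch size 16), the patch grid and the default `bool_masked_pos` look as follows. This is only a shape sketch; the masked second half of the patches is the region of the prompt-mask canvas reserved for the prediction.

```python
import numpy as np

image_height, image_width, patch_size = 896, 448, 16
patch_height, patch_width = image_height // patch_size, image_width // patch_size
num_patches = patch_height * patch_width
print(patch_height, patch_width, num_patches)  # 56 28 1568

# Default bool_masked_pos used at inference: mask the second (bottom) half of the patches,
# which is where the model paints its prediction.
bool_masked_pos = np.zeros(num_patches, dtype=bool)
bool_masked_pos[num_patches // 2 :] = True
print(int(bool_masked_pos.sum()))  # 784
```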

mindnlp.transformers.models.seggpt.modeling_seggpt.SegGptEncoderOutput dataclass

Bases: ModelOutput

Output type of [SegGptEncoderOutput].

PARAMETER DESCRIPTION
last_hidden_state

Sequence of hidden-states at the output of the last layer of the model.

TYPE: `ms.Tensor` of shape `(batch_size, patch_height, patch_width, hidden_size)`

hidden_states

Tuple of ms.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, patch_height, patch_width, hidden_size).

TYPE: `Tuple[ms.Tensor]`, `optional`, returned when `config.output_hidden_states=True` DEFAULT: None

attentions

Tuple of ms.Tensor (one for each layer) of shape (batch_size, num_heads, seq_len, seq_len).

TYPE: `Tuple[ms.Tensor]`, `optional`, returned when `config.output_attentions=True` DEFAULT: None

intermediate_hidden_states

Tuple of ms.Tensor of shape (batch_size, patch_height, patch_width, hidden_size). Each element corresponds to the output of the layer specified in config.intermediate_hidden_state_indices; each feature additionally passes through a LayerNorm.

TYPE: `Tuple[ms.Tensor]`, `optional`, returned when `config.intermediate_hidden_state_indices` is set DEFAULT: None

Source code in mindnlp/transformers/models/seggpt/modeling_seggpt.py
@dataclass
class SegGptEncoderOutput(ModelOutput):
    """
    Output type of [`SegGptEncoderOutput`].

    Args:
        last_hidden_state (`ms.Tensor` of shape `(batch_size, patch_height, patch_width, hidden_size)`):
            Sequence of hidden-states at the output of the last layer of the model.
        hidden_states (`Tuple[ms.Tensor]`, `optional`, returned when `config.output_hidden_states=True`):
            Tuple of `ms.Tensor` (one for the output of the embeddings + one for the output of each layer)
            of shape `(batch_size, patch_height, patch_width, hidden_size)`.
        attentions (`Tuple[ms.Tensor]`, `optional`, returned when `config.output_attentions=True`):
            Tuple of *ms.Tensor* (one for each layer) of shape
            `(batch_size, num_heads, seq_len, seq_len)`.
        intermediate_hidden_states (`Tuple[ms.Tensor]`, `optional`, returned when
            `config.intermediate_hidden_state_indices` is set):
            Tuple of `ms.Tensor` of shape `(batch_size, patch_height, patch_width, hidden_size)`.
            Each element in the Tuple corresponds to the output of the layer specified in
            `config.intermediate_hidden_state_indices`. Additionally, each feature passes through a LayerNorm.
    """

    last_hidden_state: ms.Tensor
    hidden_states: Optional[Tuple[ms.Tensor]] = None
    attentions: Optional[Tuple[ms.Tensor]] = None
    intermediate_hidden_states: Optional[Tuple[ms.Tensor]] = None

mindnlp.transformers.models.seggpt.modeling_seggpt.SegGptForImageSegmentation

Bases: SegGptPreTrainedModel

Source code in mindnlp/transformers/models/seggpt/modeling_seggpt.py
class SegGptForImageSegmentation(SegGptPreTrainedModel):
    def __init__(self, config: SegGptConfig):
        super().__init__(config)
        self.config = config

        self.model = SegGptModel(config)
        self.decoder = SegGptDecoder(config)

        # Initialize weights and apply final processing
        self.post_init()

    def forward(
        self,
        pixel_values: ms.Tensor,
        prompt_pixel_values: ms.Tensor,
        prompt_masks: ms.Tensor,
        bool_masked_pos: Optional[ms.Tensor] = None,
        feature_ensemble: Optional[bool] = None,
        embedding_type: Optional[str] = None,
        labels: Optional[ms.Tensor] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[Tuple, SegGptImageSegmentationOutput]:
        r"""
        Args:
            labels (`ms.Tensor` of shape `(batch_size, num_channels, height, width)`, `optional`):
                Ground truth mask for input images.

        Returns:
            `Union[Tuple, SegGptImageSegmentationOutput]`

        Example:
            ```python
            >>> from transformers import SegGptImageProcessor, SegGptForImageSegmentation
            >>> from PIL import Image
            >>> import requests
            ...
            >>> image_input_url = "https://raw.githubusercontent.com/baaivision/Painter/main/SegGPT/SegGPT_inference/examples/hmbb_2.jpg"
            >>> image_prompt_url = "https://raw.githubusercontent.com/baaivision/Painter/main/SegGPT/SegGPT_inference/examples/hmbb_1.jpg"
            >>> mask_prompt_url = "https://raw.githubusercontent.com/baaivision/Painter/main/SegGPT/SegGPT_inference/examples/hmbb_1_target.png"
            ...
            >>> image_input = Image.open(requests.get(image_input_url, stream=True).raw)
            >>> image_prompt = Image.open(requests.get(image_prompt_url, stream=True).raw)
            >>> mask_prompt = Image.open(requests.get(mask_prompt_url, stream=True).raw).convert("L")
            ...
            >>> checkpoint = "BAAI/seggpt-vit-large"
            >>> model = SegGptForImageSegmentation.from_pretrained(checkpoint)
            >>> image_processor = SegGptImageProcessor.from_pretrained(checkpoint)
            ...
            >>> inputs = image_processor(images=image_input, prompt_images=image_prompt, prompt_masks=mask_prompt, return_tensors="pt")
            >>> outputs = model(**inputs)
            >>> result = image_processor.post_process_semantic_segmentation(outputs, target_sizes=[image_input.size[::-1]])[0]
            >>> print(list(result.shape))
            [170, 297]
            ```
        """
        output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
        output_hidden_states = (
            output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
        )
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        if bool_masked_pos is None:
            num_patches = self.model.embeddings.patch_embeddings.num_patches
            bool_masked_pos = ops.zeros(num_patches, dtype=ms.bool_)
            bool_masked_pos[num_patches // 2:] = 1
            bool_masked_pos = bool_masked_pos.unsqueeze(0)

        outputs = self.model(
            pixel_values=pixel_values,
            prompt_pixel_values=prompt_pixel_values,
            prompt_masks=prompt_masks,
            bool_masked_pos=bool_masked_pos,
            feature_ensemble=feature_ensemble,
            embedding_type=embedding_type,
            labels=labels,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )

        intermediate_hidden_states = outputs.intermediate_hidden_states if return_dict else outputs[-1]
        intermediate_hidden_states = ops.cat(
            intermediate_hidden_states, axis=-1)
        pred_masks = self.decoder(intermediate_hidden_states)

        loss = None
        if labels is not None:
            loss_fn = SegGptLoss(self.config)
            loss = loss_fn(prompt_masks, pred_masks, labels, bool_masked_pos)

        if not return_dict:
            output = (pred_masks,)
            if output_hidden_states:
                output = output + (outputs[1],)

            if output_attentions:
                idx = 2 if output_hidden_states else 1
                output = output + (outputs[idx],)

            if loss is not None:
                output = (loss,) + output
            return output

        return SegGptImageSegmentationOutput(
            loss=loss,
            pred_masks=pred_masks,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )
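
The decoder input is simply the stored encoder features (at `config.intermediate_hidden_state_indices = [5, 11, 17, 23]` by default) concatenated on the channel axis, as the NumPy shape sketch below illustrates with random values:

```python
import numpy as np

batch, patch_h, patch_w, hidden_size = 1, 56, 28, 1024
intermediate_hidden_states = tuple(
    np.random.randn(batch, patch_h, patch_w, hidden_size) for _ in range(4)
)
decoder_input = np.concatenate(intermediate_hidden_states, axis=-1)
print(decoder_input.shape)  # (1, 56, 28, 4096) -> fed to the decoder to produce pred_masks
```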

mindnlp.transformers.models.seggpt.modeling_seggpt.SegGptForImageSegmentation.forward(pixel_values, prompt_pixel_values, prompt_masks, bool_masked_pos=None, feature_ensemble=None, embedding_type=None, labels=None, output_attentions=None, output_hidden_states=None, return_dict=None)

PARAMETER DESCRIPTION
labels

Ground truth mask for input images.

TYPE: `ms.Tensor` of shape `(batch_size, num_channels, height, width)`, `optional` DEFAULT: None

RETURNS DESCRIPTION
Union[Tuple, SegGptImageSegmentationOutput]

Union[Tuple, SegGptImageSegmentationOutput]

Example
>>> from transformers import SegGptImageProcessor, SegGptForImageSegmentation
>>> from PIL import Image
>>> import requests
...
>>> image_input_url = "https://raw.githubusercontent.com/baaivision/Painter/main/SegGPT/SegGPT_inference/examples/hmbb_2.jpg"
>>> image_prompt_url = "https://raw.githubusercontent.com/baaivision/Painter/main/SegGPT/SegGPT_inference/examples/hmbb_1.jpg"
>>> mask_prompt_url = "https://raw.githubusercontent.com/baaivision/Painter/main/SegGPT/SegGPT_inference/examples/hmbb_1_target.png"
...
>>> image_input = Image.open(requests.get(image_input_url, stream=True).raw)
>>> image_prompt = Image.open(requests.get(image_prompt_url, stream=True).raw)
>>> mask_prompt = Image.open(requests.get(mask_prompt_url, stream=True).raw).convert("L")
...
>>> checkpoint = "BAAI/seggpt-vit-large"
>>> model = SegGptForImageSegmentation.from_pretrained(checkpoint)
>>> image_processor = SegGptImageProcessor.from_pretrained(checkpoint)
...
>>> inputs = image_processor(images=image_input, prompt_images=image_prompt, prompt_masks=mask_prompt, return_tensors="pt")
>>> outputs = model(**inputs)
>>> result = image_processor.post_process_semantic_segmentation(outputs, target_sizes=[image_input.size[::-1]])[0]
>>> print(list(result.shape))
[170, 297]
Source code in mindnlp/transformers/models/seggpt/modeling_seggpt.py
def forward(
    self,
    pixel_values: ms.Tensor,
    prompt_pixel_values: ms.Tensor,
    prompt_masks: ms.Tensor,
    bool_masked_pos: Optional[ms.Tensor] = None,
    feature_ensemble: Optional[bool] = None,
    embedding_type: Optional[str] = None,
    labels: Optional[ms.Tensor] = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
) -> Union[Tuple, SegGptImageSegmentationOutput]:
    r"""
    Args:
        labels (`ms.Tensor` of shape `(batch_size, num_channels, height, width)`, `optional`):
            Ground truth mask for input images.

    Returns:
        `Union[Tuple, SegGptImageSegmentationOutput]`

    Example:
        ```python
        >>> from transformers import SegGptImageProcessor, SegGptForImageSegmentation
        >>> from PIL import Image
        >>> import requests
        ...
        >>> image_input_url = "https://raw.githubusercontent.com/baaivision/Painter/main/SegGPT/SegGPT_inference/examples/hmbb_2.jpg"
        >>> image_prompt_url = "https://raw.githubusercontent.com/baaivision/Painter/main/SegGPT/SegGPT_inference/examples/hmbb_1.jpg"
        >>> mask_prompt_url = "https://raw.githubusercontent.com/baaivision/Painter/main/SegGPT/SegGPT_inference/examples/hmbb_1_target.png"
        ...
        >>> image_input = Image.open(requests.get(image_input_url, stream=True).raw)
        >>> image_prompt = Image.open(requests.get(image_prompt_url, stream=True).raw)
        >>> mask_prompt = Image.open(requests.get(mask_prompt_url, stream=True).raw).convert("L")
        ...
        >>> checkpoint = "BAAI/seggpt-vit-large"
        >>> model = SegGptForImageSegmentation.from_pretrained(checkpoint)
        >>> image_processor = SegGptImageProcessor.from_pretrained(checkpoint)
        ...
        >>> inputs = image_processor(images=image_input, prompt_images=image_prompt, prompt_masks=mask_prompt, return_tensors="pt")
        >>> outputs = model(**inputs)
        >>> result = image_processor.post_process_semantic_segmentation(outputs, target_sizes=[image_input.size[::-1]])[0]
        >>> print(list(result.shape))
        [170, 297]
        ```
    """
    output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
    output_hidden_states = (
        output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
    )
    return_dict = return_dict if return_dict is not None else self.config.use_return_dict

    if bool_masked_pos is None:
        num_patches = self.model.embeddings.patch_embeddings.num_patches
        bool_masked_pos = ops.zeros(num_patches, dtype=ms.bool_)
        bool_masked_pos[num_patches // 2:] = 1
        bool_masked_pos = bool_masked_pos.unsqueeze(0)

    outputs = self.model(
        pixel_values=pixel_values,
        prompt_pixel_values=prompt_pixel_values,
        prompt_masks=prompt_masks,
        bool_masked_pos=bool_masked_pos,
        feature_ensemble=feature_ensemble,
        embedding_type=embedding_type,
        labels=labels,
        output_attentions=output_attentions,
        output_hidden_states=output_hidden_states,
        return_dict=return_dict,
    )

    intermediate_hidden_states = outputs.intermediate_hidden_states if return_dict else outputs[-1]
    intermediate_hidden_states = ops.cat(
        intermediate_hidden_states, axis=-1)
    pred_masks = self.decoder(intermediate_hidden_states)

    loss = None
    if labels is not None:
        loss_fn = SegGptLoss(self.config)
        loss = loss_fn(prompt_masks, pred_masks, labels, bool_masked_pos)

    if not return_dict:
        output = (pred_masks,)
        if output_hidden_states:
            output = output + (outputs[1],)

        if output_attentions:
            idx = 2 if output_hidden_states else 1
            output = output + (outputs[idx],)

        if loss is not None:
            output = (loss,) + output
        return output

    return SegGptImageSegmentationOutput(
        loss=loss,
        pred_masks=pred_masks,
        hidden_states=outputs.hidden_states,
        attentions=outputs.attentions,
    )

mindnlp.transformers.models.seggpt.modeling_seggpt.SegGptImageSegmentationOutput dataclass

Bases: ModelOutput

Output type of [SegGptImageSegmentationOutput].

PARAMETER DESCRIPTION
loss

The loss value.

TYPE: `ms.Tensor`, `optional`, returned when `labels` is provided DEFAULT: None

pred_masks

The predicted masks.

TYPE: `ms.Tensor` of shape `(batch_size, num_channels, height, width)` DEFAULT: None

hidden_states

Tuple of ms.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, patch_height, patch_width, hidden_size).

TYPE: `Tuple[ms.Tensor]`, `optional`, returned when `config.output_hidden_states=True` DEFAULT: None

attentions

Tuple of ms.Tensor (one for each layer) of shape (batch_size, num_heads, seq_len, seq_len).

TYPE: `Tuple[ms.Tensor]`, `optional`, returned when `config.output_attentions=True` DEFAULT: None

Source code in mindnlp/transformers/models/seggpt/modeling_seggpt.py
@dataclass
class SegGptImageSegmentationOutput(ModelOutput):
    """
    Output type of [`SegGptImageSegmentationOutput`].

    Args:
        loss (`ms.Tensor`, `optional`, returned when `labels` is provided):
            The loss value.
        pred_masks (`ms.Tensor` of shape `(batch_size, num_channels, height, width)`):
            The predicted masks.
        hidden_states (`Tuple[ms.Tensor]`, `optional`, returned when `config.output_hidden_states=True`):
            Tuple of `ms.Tensor` (one for the output of the embeddings + one for the output of each layer)
            of shape `(batch_size, patch_height, patch_width, hidden_size)`.
        attentions (`Tuple[ms.Tensor]`, `optional`, returned when `config.output_attentions=True`):
            Tuple of `ms.Tensor` (one for each layer) of shape
            `(batch_size, num_heads, seq_len, seq_len)`.
    """

    loss: Optional[ms.Tensor] = None
    pred_masks: Optional[ms.Tensor] = None
    hidden_states: Optional[Tuple[ms.Tensor]] = None
    attentions: Optional[Tuple[ms.Tensor]] = None

mindnlp.transformers.models.seggpt.modeling_seggpt.SegGptLayerNorm

Bases: Module

LayerNorm that supports two data formats, channels_last (default) or channels_first, referring to the ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch_size, height, width, channels), while channels_first corresponds to inputs with shape (batch_size, channels, height, width).

Source code in mindnlp/transformers/models/seggpt/modeling_seggpt.py
class SegGptLayerNorm(nn.Module):
    r"""
    LayerNorm that supports two data formats: channels_last (default) or channels_first.
    The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch_size, height,
    width, channels) while channels_first corresponds to inputs with shape (batch_size, channels, height, width).
    """

    def __init__(self, normalized_shape, eps=1e-6, data_format="channels_last"):
        super().__init__()
        self.weight = ms.Parameter(ops.ones(normalized_shape))
        self.bias = ms.Parameter(ops.zeros(normalized_shape))
        self.eps = eps
        self.data_format = data_format
        if self.data_format not in ["channels_last", "channels_first"]:
            raise NotImplementedError(
                f"Unsupported data format: {self.data_format}")
        self.normalized_shape = (normalized_shape,)

    def forward(self, x: ms.Tensor) -> ms.Tensor:
        if self.data_format == "channels_last":
            x = ops.layer_norm(x, self.normalized_shape,
                               self.weight, self.bias, self.eps)
        elif self.data_format == "channels_first":
            input_dtype = x.dtype
            x = x.float()
            u = x.mean(1, keep_dims=True)
            s = (x - u).pow(2).mean(1, keep_dims=True)
            x = (x - u) / ops.sqrt(s + self.eps)
            x = x.astype(input_dtype)
            x = self.weight[:, None, None] * x + self.bias[:, None, None]
        return x
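
A NumPy sketch of the `channels_first` branch, normalizing over the channel axis of a `(batch_size, channels, height, width)` tensor (illustrative only):

```python
import numpy as np

def layer_norm_channels_first(x, weight, bias, eps=1e-6):
    u = x.mean(axis=1, keepdims=True)               # per-pixel mean over channels
    s = ((x - u) ** 2).mean(axis=1, keepdims=True)  # per-pixel variance over channels
    x = (x - u) / np.sqrt(s + eps)
    return weight[:, None, None] * x + bias[:, None, None]

x = np.random.randn(2, 64, 8, 8)
out = layer_norm_channels_first(x, np.ones(64), np.zeros(64))
print(out.shape)                                      # (2, 64, 8, 8)
print(float(np.abs(out.mean(axis=1)).max()) < 1e-6)   # per-pixel channel means are ~0
```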

mindnlp.transformers.models.seggpt.modeling_seggpt.SegGptLoss

Bases: Module

Source code in mindnlp/transformers/models/seggpt/modeling_seggpt.py
class SegGptLoss(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.beta = config.beta
        self.patch_size = config.patch_size

    def forward(
        self,
        prompt_masks: ms.Tensor,
        pred_masks: ms.Tensor,
        labels: ms.Tensor,
        bool_masked_pos: ms.Tensor,
    ):
        """Computes the L1 loss between the predicted masks and the ground truth masks.

        Args:
            prompt_masks (`ms.Tensor` of shape `(batch_size, num_channels, height, width)`):
                Pixel values from mask prompt.

            pred_masks (`ms.Tensor` of shape `(batch_size, num_channels, 2*height, width)`):
                Predicted masks.

            labels (`ms.Tensor` of shape `(batch_size, num_channels, height, width)`):
                Ground truth mask for input images.

            bool_masked_pos (`ms.Tensor` of shape `(batch_size, num_patches)`):
                Boolean masked positions. Indicates which patches are masked (1) and which aren't (0).

        Returns:
            `ms.Tensor`: The mean L1 loss between the predicted masks and the ground truth masks.
        """
        ground_truth = ops.cat((prompt_masks, labels), axis=2)

        mask = bool_masked_pos[:, :, None].repeat(1, 1, self.patch_size**2 * 3)
        mask = unpatchify(
            mask, ground_truth.shape[2] // self.patch_size, ground_truth.shape[3] // self.patch_size)

        loss = ops.smooth_l1_loss(
            pred_masks, ground_truth, reduction="none", beta=self.beta)
        loss = (loss * mask).sum() / mask.sum()  # mean loss on removed patches

        return loss

mindnlp.transformers.models.seggpt.modeling_seggpt.SegGptLoss.forward(prompt_masks, pred_masks, labels, bool_masked_pos)

Computes the L1 loss between the predicted masks and the ground truth masks.

PARAMETER DESCRIPTION
prompt_masks

Pixel values from mask prompt.

TYPE: `ms.Tensor` of shape `(batch_size, num_channels, height, width)`

pred_masks

Predicted masks.

TYPE: `ms.Tensor` of shape `(batch_size, num_channels, 2*height, width)`

labels

Ground truth mask for input images.

TYPE: `ms.Tensor` of shape `(batch_size, num_channels, height, width)`

bool_masked_pos

Boolean masked positions. Indicates which patches are masked (1) and which aren't (0).

TYPE: `ms.Tensor` of shape `(batch_size, num_patches)`

RETURNS DESCRIPTION

ms.Tensor: The mean L1 loss between the predicted masks and the ground truth masks.

Source code in mindnlp/transformers/models/seggpt/modeling_seggpt.py
def forward(
    self,
    prompt_masks: ms.Tensor,
    pred_masks: ms.Tensor,
    labels: ms.Tensor,
    bool_masked_pos: ms.Tensor,
):
    """Computes the L1 loss between the predicted masks and the ground truth masks.

    Args:
        prompt_masks (`ms.Tensor` of shape `(batch_size, num_channels, height, width)`):
            Pixel values from mask prompt.

        pred_masks (`ms.Tensor` of shape `(batch_size, num_channels, 2*height, width)`):
            Predicted masks.

        labels (`ms.Tensor` of shape `(batch_size, num_channels, height, width)`):
            Ground truth mask for input images.

        bool_masked_pos (`ms.Tensor` of shape `(batch_size, num_patches)`):
            Boolean masked positions. Indicates which patches are masked (1) and which aren't (0).

    Returns:
        `ms.Tensor`: The mean L1 loss between the predicted masks and the ground truth masks.
    """
    ground_truth = ops.cat((prompt_masks, labels), axis=2)

    mask = bool_masked_pos[:, :, None].repeat(1, 1, self.patch_size**2 * 3)
    mask = unpatchify(
        mask, ground_truth.shape[2] // self.patch_size, ground_truth.shape[3] // self.patch_size)

    loss = ops.smooth_l1_loss(
        pred_masks, ground_truth, reduction="none", beta=self.beta)
    loss = (loss * mask).sum() / mask.sum()  # mean loss on removed patches

    return loss
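
A NumPy sketch of the masked smooth-L1 reduction, using random values and toy sizes. The ground truth is the prompt mask stacked on top of the labels along the height axis, and only the masked (bottom) half contributes to the mean:

```python
import numpy as np

def smooth_l1(x, y, beta):
    diff = np.abs(x - y)
    return np.where(diff < beta, 0.5 * diff ** 2 / beta, diff - 0.5 * beta)

beta = 0.01
prompt_masks = np.random.rand(1, 3, 4, 4)
labels = np.random.rand(1, 3, 4, 4)
pred_masks = np.random.rand(1, 3, 8, 4)            # prediction covers the full stacked canvas

ground_truth = np.concatenate((prompt_masks, labels), axis=2)  # (1, 3, 8, 4)
mask = np.zeros_like(ground_truth)
mask[:, :, 4:, :] = 1.0                            # stand-in for the unpatchified bool_masked_pos

loss = smooth_l1(pred_masks, ground_truth, beta)
loss = (loss * mask).sum() / mask.sum()            # mean loss on the masked (predicted) half only
print(float(loss))
```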

mindnlp.transformers.models.seggpt.modeling_seggpt.SegGptModel

Bases: SegGptPreTrainedModel

Source code in mindnlp/transformers/models/seggpt/modeling_seggpt.py
class SegGptModel(SegGptPreTrainedModel):
    def __init__(self, config: SegGptConfig):
        super().__init__(config)
        self.config = config

        self.embeddings = SegGptEmbeddings(config)
        self.encoder = SegGptEncoder(config)

        # Initialize weights and apply final processing
        self.post_init()

    def get_input_embeddings(self) -> SegGptPatchEmbeddings:
        return self.embeddings.patch_embeddings

    def _prune_heads(self, heads_to_prune: Dict[int, List[int]]) -> None:
        """
        Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base
        class PreTrainedModel
        """
        for layer, heads in heads_to_prune.items():
            self.encoder.layer[layer].attention.prune_heads(heads)

    def forward(
        self,
        pixel_values: ms.Tensor,
        prompt_pixel_values: ms.Tensor,
        prompt_masks: ms.Tensor,
        bool_masked_pos: Optional[ms.Tensor] = None,
        feature_ensemble: Optional[bool] = None,
        embedding_type: Optional[str] = None,
        labels: Optional[ms.Tensor] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[Tuple, SegGptEncoderOutput]:
        r"""
        Args:
            labels (`ms.Tensor` of shape `(batch_size, num_channels, height, width)`, `optional`):
                Ground truth mask for input images.

        Returns:
            `Union[Tuple, SegGptEncoderOutput]`

        Example:
            ```python
            >>> from transformers import SegGptImageProcessor, SegGptModel
            >>> from PIL import Image
            >>> import requests
            ...
            >>> image_input_url = "https://raw.githubusercontent.com/baaivision/Painter/main/SegGPT/SegGPT_inference/examples/hmbb_2.jpg"
            >>> image_prompt_url = "https://raw.githubusercontent.com/baaivision/Painter/main/SegGPT/SegGPT_inference/examples/hmbb_1.jpg"
            >>> mask_prompt_url = "https://raw.githubusercontent.com/baaivision/Painter/main/SegGPT/SegGPT_inference/examples/hmbb_1_target.png"
            ...
            >>> image_input = Image.open(requests.get(image_input_url, stream=True).raw)
            >>> image_prompt = Image.open(requests.get(image_prompt_url, stream=True).raw)
            >>> mask_prompt = Image.open(requests.get(mask_prompt_url, stream=True).raw).convert("L")
            ...
            >>> checkpoint = "BAAI/seggpt-vit-large"
            >>> model = SegGptModel.from_pretrained(checkpoint)
            >>> image_processor = SegGptImageProcessor.from_pretrained(checkpoint)
            ...
            >>> inputs = image_processor(images=image_input, prompt_images=image_prompt, prompt_masks=mask_prompt, return_tensors="pt")
            ...
            >>> outputs = model(**inputs)
            >>> list(outputs.last_hidden_state.shape)
            [1, 56, 28, 1024]
            ```
        """
        output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
        output_hidden_states = (
            output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
        )
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict
        feature_ensemble = feature_ensemble if feature_ensemble is not None else False

        expected_dtype = self.embeddings.patch_embeddings.projection.weight.dtype
        pixel_values = pixel_values.astype(expected_dtype)
        prompt_pixel_values = prompt_pixel_values.astype(expected_dtype)

        # Prepare inputs
        pixel_values = ops.cat((prompt_pixel_values, pixel_values), axis=2)
        prompt_pixel_values = (
            ops.cat((prompt_masks, prompt_masks), axis=2)
            if labels is None
            else ops.cat((prompt_masks, labels), axis=2)
        )
        prompt_pixel_values = prompt_pixel_values.astype(expected_dtype)

        if bool_masked_pos is None and labels is not None:
            logger.warning_once(
                "Labels were provided, but bool_masked_pos were not. It will be set to default value. If you're training the model, make sure to provide a bool_masked_pos."
            )

        # We concatenate along the height axis so SegGPT can process the pair as a single image, hence we need to mask
        # the portion of the mask prompt pixels that is destined for the prediction, as it doesn't add any information.
        # This is only the case for inference. In training, the concatenation of prompt mask and label is masked
        # and forwarded again together (In-Context Painting).
        if bool_masked_pos is None:
            num_patches = self.embeddings.patch_embeddings.num_patches
            bool_masked_pos = ops.zeros(num_patches, dtype=ms.bool_)
            bool_masked_pos[num_patches // 2:] = 1
            bool_masked_pos = bool_masked_pos.unsqueeze(0)

        embedding_output = self.embeddings(
            pixel_values, prompt_pixel_values, embedding_type=embedding_type, bool_masked_pos=bool_masked_pos
        )

        encoder_outputs = self.encoder(
            embedding_output,
            feature_ensemble=feature_ensemble,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )

        return encoder_outputs
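
To make the input preparation above concrete: the prompt image is stacked on top of the query image along the height axis so the encoder sees one tall image, and when no `bool_masked_pos` is given, the bottom half of the patch grid (the query half) is masked by default. The NumPy sketch below only illustrates the resulting shapes, assuming the checkpoint's default 448x448 halves and 16x16 patches; it is not part of the model code.

```python
import numpy as np

# Illustrative shapes, assuming two 448x448 halves (prompt stacked on top of
# the query image) and 16x16 patches, as in the default configuration.
prompt = np.zeros((1, 3, 448, 448))
query = np.zeros((1, 3, 448, 448))

# Height-axis concatenation, mirroring ops.cat(..., axis=2) above.
stacked = np.concatenate((prompt, query), axis=2)
print(stacked.shape)  # (1, 3, 896, 448)

# Default bool_masked_pos: the bottom half of the patch grid (the query half)
# is masked, since only that region is predicted at inference time.
num_patches = (896 // 16) * (448 // 16)      # 56 * 28 = 1568
bool_masked_pos = np.zeros(num_patches, dtype=bool)
bool_masked_pos[num_patches // 2:] = True    # 784 masked patches
print(bool_masked_pos.sum())                 # 784
```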

mindnlp.transformers.models.seggpt.modeling_seggpt.SegGptModel.forward(pixel_values, prompt_pixel_values, prompt_masks, bool_masked_pos=None, feature_ensemble=None, embedding_type=None, labels=None, output_attentions=None, output_hidden_states=None, return_dict=None)

PARAMETER DESCRIPTION
labels

Ground truth mask for input images.

TYPE: `ms.Tensor` of shape `(batch_size, num_channels, height, width)`, `optional` DEFAULT: None

RETURNS DESCRIPTION
`Union[Tuple, SegGptEncoderOutput]`

Example
>>> from transformers import SegGptImageProcessor, SegGptModel
>>> from PIL import Image
>>> import requests
...
>>> image_input_url = "https://raw.githubusercontent.com/baaivision/Painter/main/SegGPT/SegGPT_inference/examples/hmbb_2.jpg"
>>> image_prompt_url = "https://raw.githubusercontent.com/baaivision/Painter/main/SegGPT/SegGPT_inference/examples/hmbb_1.jpg"
>>> mask_prompt_url = "https://raw.githubusercontent.com/baaivision/Painter/main/SegGPT/SegGPT_inference/examples/hmbb_1_target.png"
...
>>> image_input = Image.open(requests.get(image_input_url, stream=True).raw)
>>> image_prompt = Image.open(requests.get(image_prompt_url, stream=True).raw)
>>> mask_prompt = Image.open(requests.get(mask_prompt_url, stream=True).raw).convert("L")
...
>>> checkpoint = "BAAI/seggpt-vit-large"
>>> model = SegGptModel.from_pretrained(checkpoint)
>>> image_processor = SegGptImageProcessor.from_pretrained(checkpoint)
...
>>> inputs = image_processor(images=image_input, prompt_images=image_prompt, prompt_masks=mask_prompt, return_tensors="pt")
...
>>> outputs = model(**inputs)
>>> list(outputs.last_hidden_state.shape)
[1, 56, 28, 1024]
Source code in mindnlp/transformers/models/seggpt/modeling_seggpt.py
def forward(
    self,
    pixel_values: ms.Tensor,
    prompt_pixel_values: ms.Tensor,
    prompt_masks: ms.Tensor,
    bool_masked_pos: Optional[ms.Tensor] = None,
    feature_ensemble: Optional[bool] = None,
    embedding_type: Optional[str] = None,
    labels: Optional[ms.Tensor] = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
) -> Union[Tuple, SegGptEncoderOutput]:
    r"""
    Args:
        labels (`ms.Tensor` of shape `(batch_size, num_channels, height, width)`, `optional`):
            Ground truth mask for input images.

    Returns:
        `Union[Tuple, SegGptEncoderOutput]`

    Example:
        ```python
        >>> from transformers import SegGptImageProcessor, SegGptModel
        >>> from PIL import Image
        >>> import requests
        ...
        >>> image_input_url = "https://raw.githubusercontent.com/baaivision/Painter/main/SegGPT/SegGPT_inference/examples/hmbb_2.jpg"
        >>> image_prompt_url = "https://raw.githubusercontent.com/baaivision/Painter/main/SegGPT/SegGPT_inference/examples/hmbb_1.jpg"
        >>> mask_prompt_url = "https://raw.githubusercontent.com/baaivision/Painter/main/SegGPT/SegGPT_inference/examples/hmbb_1_target.png"
        ...
        >>> image_input = Image.open(requests.get(image_input_url, stream=True).raw)
        >>> image_prompt = Image.open(requests.get(image_prompt_url, stream=True).raw)
        >>> mask_prompt = Image.open(requests.get(mask_prompt_url, stream=True).raw).convert("L")
        ...
        >>> checkpoint = "BAAI/seggpt-vit-large"
        >>> model = SegGptModel.from_pretrained(checkpoint)
        >>> image_processor = SegGptImageProcessor.from_pretrained(checkpoint)
        ...
        >>> inputs = image_processor(images=image_input, prompt_images=image_prompt, prompt_masks=mask_prompt, return_tensors="pt")
        ...
        >>> outputs = model(**inputs)
        >>> list(outputs.last_hidden_state.shape)
        [1, 56, 28, 1024]
        ```
    """
    output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
    output_hidden_states = (
        output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
    )
    return_dict = return_dict if return_dict is not None else self.config.use_return_dict
    feature_ensemble = feature_ensemble if feature_ensemble is not None else False

    expected_dtype = self.embeddings.patch_embeddings.projection.weight.dtype
    pixel_values = pixel_values.astype(expected_dtype)
    prompt_pixel_values = prompt_pixel_values.astype(expected_dtype)

    # Prepare inputs
    pixel_values = ops.cat((prompt_pixel_values, pixel_values), axis=2)
    prompt_pixel_values = (
        ops.cat((prompt_masks, prompt_masks), axis=2)
        if labels is None
        else ops.cat((prompt_masks, labels), axis=2)
    )
    prompt_pixel_values = prompt_pixel_values.astype(expected_dtype)

    if bool_masked_pos is None and labels is not None:
        logger.warning_once(
            "Labels were provided, but bool_masked_pos were not. It will be set to default value. If you're training the model, make sure to provide a bool_masked_pos."
        )

    # We concatenate along the height axis so SegGPT can process the pair as a single image, hence we need to mask
    # the portion of the mask prompt pixels that is destined for the prediction, as it doesn't add any information.
    # This is only the case for inference. In training, the concatenation of prompt mask and label is masked
    # and forwarded again together (In-Context Painting).
    if bool_masked_pos is None:
        num_patches = self.embeddings.patch_embeddings.num_patches
        bool_masked_pos = ops.zeros(num_patches, dtype=ms.bool_)
        bool_masked_pos[num_patches // 2:] = 1
        bool_masked_pos = bool_masked_pos.unsqueeze(0)

    embedding_output = self.embeddings(
        pixel_values, prompt_pixel_values, embedding_type=embedding_type, bool_masked_pos=bool_masked_pos
    )

    encoder_outputs = self.encoder(
        embedding_output,
        feature_ensemble=feature_ensemble,
        output_attentions=output_attentions,
        output_hidden_states=output_hidden_states,
        return_dict=return_dict,
    )

    return encoder_outputs

mindnlp.transformers.models.seggpt.modeling_seggpt.SegGptPatchEmbeddings

Bases: Module

This class turns pixel_values of shape (batch_size, num_channels, height, width) into the initial hidden_states (patch embeddings) of shape (batch_size, seq_length, hidden_size) to be consumed by a Transformer.

Source code in mindnlp/transformers/models/seggpt/modeling_seggpt.py
class SegGptPatchEmbeddings(nn.Module):
    """
    This class turns `pixel_values` of shape `(batch_size, num_channels, height, width)` into the initial
    `hidden_states` (patch embeddings) of shape `(batch_size, seq_length, hidden_size)` to be consumed by a
    Transformer.
    """

    def __init__(self, config):
        super().__init__()
        image_size, patch_size = config.image_size, config.patch_size
        num_channels, hidden_size = config.num_channels, config.hidden_size
        image_size = image_size if isinstance(
            image_size, collections.abc.Iterable) else (image_size, image_size)
        patch_size = patch_size if isinstance(
            patch_size, collections.abc.Iterable) else (patch_size, patch_size)
        num_patches = (image_size[1] // patch_size[1]) * \
            (image_size[0] // patch_size[0])
        self.image_size = image_size
        self.patch_size = patch_size
        self.num_channels = num_channels
        self.num_patches = num_patches

        self.projection = nn.Conv2d(
            num_channels, hidden_size, kernel_size=patch_size, stride=patch_size, bias=True, pad_mode='pad', padding=0)

    def forward(self, pixel_values):
        batch_size, num_channels, height, width = pixel_values.shape
        if num_channels != self.num_channels:
            raise ValueError(
                "Make sure that the channel dimension of the pixel values match with the one set in the configuration."
            )
        if height != self.image_size[0] or width != self.image_size[1]:
            raise ValueError(
                f"Input image size ({height}*{width}) doesn't match model ({self.image_size[0]}*{self.image_size[1]})."
            )
        embeddings = self.projection(pixel_values).permute(0, 2, 3, 1)
        return embeddings
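
As a quick sanity check on the shapes above: the stride-16 projection turns the image into an (height // 16, width // 16) grid of patch embeddings, which is then permuted to channels-last. A small sketch, assuming the default 896x448 input and hidden_size=1024, reproduces the (1, 56, 28, 1024) last_hidden_state shape shown in the SegGptModel example; the numbers are illustrative, not read from the library.

```python
# Shape check for the patch embedding, assuming the default 896x448 image,
# 16x16 patches, and hidden_size=1024 (illustrative only).
image_size, patch_size, hidden_size = (896, 448), (16, 16), 1024

grid = (image_size[0] // patch_size[0], image_size[1] // patch_size[1])
num_patches = grid[0] * grid[1]
print(grid, num_patches)          # (56, 28) 1568

# After the stride-16 convolution and the permute(0, 2, 3, 1), the embeddings
# come out channels-last: (batch_size, 56, 28, 1024), matching the
# last_hidden_state shape in the SegGptModel example above.
print((1, *grid, hidden_size))    # (1, 56, 28, 1024)
```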

mindnlp.transformers.models.seggpt.modeling_seggpt.SegGptPreTrainedModel

Bases: PreTrainedModel

An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models.

Source code in mindnlp/transformers/models/seggpt/modeling_seggpt.py
class SegGptPreTrainedModel(PreTrainedModel):
    """
    An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained
    models.
    """

    config_class = SegGptConfig
    base_model_prefix = "model"
    main_input_name = "pixel_values"
    supports_gradient_checkpointing = True
    _no_split_modules = ["SegGptEmbeddings", "SegGptLayer"]

    def _init_weights(self, cell: Union[nn.Linear, nn.Conv2d, nn.LayerNorm]) -> None:
        """Initialize the weights"""
        std = self.config.initializer_range
        if isinstance(cell, (nn.Linear, nn.Conv2d)):
            cell.weight.data.initialize(Normal(std))
            if cell.bias is not None:
                cell.bias.initialize('zeros')

        elif isinstance(cell, nn.LayerNorm):
            cell.bias.initialize('zeros')
            cell.weight.data.fill(1.0)

        elif isinstance(cell, SegGptAttention):
            cell.rel_pos_h.data.initialize(Normal(std))
            cell.rel_pos_w.data.initialize(Normal(std))

        elif isinstance(cell, SegGptEmbeddings):
            cell.position_embeddings.data.initialize(Normal(std))

            cell.mask_token.data.initialize(Normal(std))
            cell.segment_token_input.data.initialize(Normal(std))
            cell.segment_token_prompt.data.initialize(Normal(std))
            cell.type_token_semantic.data.initialize(Normal(std))
            cell.type_token_instance.data.initialize(Normal(std))

mindnlp.transformers.models.seggpt.modeling_seggpt.drop_path(input, drop_prob=0.0, training=False)

Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).

Comment by Ross Wightman: This is the same as the DropConnect impl I created for EfficientNet, etc networks, however, the original name is misleading as 'Drop Connect' is a different form of dropout in a separate paper... See discussion: https://github.com/tensorflow/tpu/issues/494#issuecomment-532968956 ... I've opted for changing the layer and argument names to 'drop path' rather than mix DropConnect as a layer name and use 'survival rate' as the argument.

Source code in mindnlp/transformers/models/seggpt/modeling_seggpt.py
def drop_path(input: ms.Tensor, drop_prob: float = 0.0, training: bool = False) -> ms.Tensor:
    """
    Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).

    Comment by Ross Wightman: This is the same as the DropConnect impl I created for EfficientNet, etc networks,
    however, the original name is misleading as 'Drop Connect' is a different form of dropout in a separate paper...
    See discussion: https://github.com/tensorflow/tpu/issues/494#issuecomment-532968956 ... I've opted for changing the
    layer and argument names to 'drop path' rather than mix DropConnect as a layer name and use 'survival rate' as the
    argument.
    """
    if drop_prob == 0.0 or not training:
        return input
    keep_prob = 1 - drop_prob
    # work with diff dim tensors, not just 2D ConvNets
    shape = (input.shape[0],) + (1,) * (input.ndim - 1)
    random_tensor = keep_prob + ops.rand(shape, dtype=input.dtype)
    random_tensor = random_tensor.floor()  # binarize
    output = input.div(keep_prob) * random_tensor
    return output
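
The function keeps each sample with probability `1 - drop_prob` and rescales kept samples by `1 / keep_prob`, so the expected activation is unchanged. Below is a minimal NumPy re-implementation for illustration, assuming nothing beyond the logic shown above; it is a sketch, not the library's API.

```python
import numpy as np

def drop_path_np(x, drop_prob=0.0, training=False, seed=0):
    # NumPy sketch of the drop_path function above, for illustration only.
    if drop_prob == 0.0 or not training:
        return x
    keep_prob = 1.0 - drop_prob
    rng = np.random.default_rng(seed)
    # One Bernoulli draw per sample, broadcast over the remaining dims.
    shape = (x.shape[0],) + (1,) * (x.ndim - 1)
    random_tensor = np.floor(keep_prob + rng.random(shape))  # binarize
    return x / keep_prob * random_tensor

x = np.ones((4, 2, 3))
out = drop_path_np(x, drop_prob=0.5, training=True)
# Dropped samples are zeroed; kept samples are scaled by 1/keep_prob (= 2.0),
# so the expectation over many draws matches the input.
print(out[:, 0, 0])
```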