Prompt tuning
mindnlp.peft.tuners.prompt_tuning.config
Prompt tuning config.
mindnlp.peft.tuners.prompt_tuning.config.PromptTuningConfig
dataclass
Bases: PromptLearningConfig
This is the configuration class to store the configuration of a [PromptEmbedding].
PARAMETER | DESCRIPTION |
---|---|
prompt_tuning_init | The initialization of the prompt embedding. TYPE: Union[PromptTuningInit, str] |
prompt_tuning_init_text | The text to initialize the prompt embedding. Only used if prompt_tuning_init is TEXT. TYPE: Optional[str] |
tokenizer_name_or_path | The name or path of the tokenizer. Only used if prompt_tuning_init is TEXT. TYPE: Optional[str] |
tokenizer_kwargs | The keyword arguments to pass to AutoTokenizer.from_pretrained. Only used if prompt_tuning_init is TEXT. TYPE: Optional[dict] |
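Example (a hedged sketch; it assumes PromptTuningConfig and PromptTuningInit are re-exported from mindnlp.peft as in upstream PEFT; adjust the import path if your version differs):
>>> from mindnlp.peft import PromptTuningConfig, PromptTuningInit
>>> # TEXT initialization requires both the init text and a tokenizer;
>>> # __post_init__ (below) enforces this.
>>> config = PromptTuningConfig(
...     task_type="SEQ_2_SEQ_LM",
...     num_virtual_tokens=20,
...     prompt_tuning_init=PromptTuningInit.TEXT,
...     prompt_tuning_init_text="Classify if the review is positive or negative:",
...     tokenizer_name_or_path="t5-base",
... )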
Source code in mindnlp/peft/tuners/prompt_tuning/config.py, lines 125-196.
mindnlp.peft.tuners.prompt_tuning.config.PromptTuningConfig.__post_init__()
Validates the PromptTuningConfig after dataclass initialization.
PARAMETER | DESCRIPTION |
---|---|
self | The instance of the PromptTuningConfig class. |
RETURNS | DESCRIPTION |
---|---|
None | This method validates the configuration in place and does not return any value. |
RAISES | DESCRIPTION |
---|---|
ValueError | If prompt_tuning_init is set to TEXT and tokenizer_name_or_path is not provided. |
ValueError | If prompt_tuning_init is set to TEXT and prompt_tuning_init_text is not provided. |
ValueError | If tokenizer_kwargs is provided but prompt_tuning_init is not set to TEXT. |
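The checks reduce to the following sketch, reconstructed from the ValueError conditions above (not the verbatim source):
>>> def validate(cfg):
...     is_text = cfg.prompt_tuning_init == "TEXT"
...     if is_text and cfg.tokenizer_name_or_path is None:
...         raise ValueError("TEXT init requires tokenizer_name_or_path.")
...     if is_text and cfg.prompt_tuning_init_text is None:
...         raise ValueError("TEXT init requires prompt_tuning_init_text.")
...     if cfg.tokenizer_kwargs and not is_text:
...         raise ValueError("tokenizer_kwargs is only valid with TEXT init.")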
Source code in mindnlp/peft/tuners/prompt_tuning/config.py, lines 167-196.
mindnlp.peft.tuners.prompt_tuning.config.PromptTuningInit
Bases: str, Enum
Enumerates the initialization strategies for prompt tuning. Because PromptTuningInit subclasses str alongside Enum, its members behave as plain strings: they compare equal to their string values, serialize cleanly, and support all standard str operations, in addition to the usual Enum attributes (name, value). Following upstream PEFT, the members are TEXT (initialize the prompt from the embeddings of a tokenized text) and RANDOM (random initialization, the default).
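A short illustration of the string-like behavior (assuming the usual TEXT and RANDOM members from upstream PEFT):
>>> from mindnlp.peft.tuners.prompt_tuning.config import PromptTuningInit
>>> PromptTuningInit.TEXT == "TEXT"
True
>>> isinstance(PromptTuningInit.RANDOM, str)
True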
Source code in mindnlp/peft/tuners/prompt_tuning/config.py, lines 23-122.
mindnlp.peft.tuners.prompt_tuning.model
Prompt tuning model.
mindnlp.peft.tuners.prompt_tuning.model.PromptEmbedding
Bases: Module
The model to encode virtual tokens into prompt embeddings.
PARAMETER | DESCRIPTION |
---|---|
config | The configuration of the prompt embedding. TYPE: PromptTuningConfig |
word_embeddings | The word embeddings of the base transformer model. TYPE: nn.Embedding |
Attributes:
- embedding (nn.Embedding) -- The embedding layer of the prompt embedding.
Example:
>>> from mindnlp.peft import PromptEmbedding, PromptTuningConfig
>>> config = PromptTuningConfig(
... peft_type="PROMPT_TUNING",
... task_type="SEQ_2_SEQ_LM",
... num_virtual_tokens=20,
... token_dim=768,
... num_transformer_submodules=1,
... num_attention_heads=12,
... num_layers=12,
... prompt_tuning_init="TEXT",
... prompt_tuning_init_text="Predict if sentiment of this review is positive, negative or neutral",
... tokenizer_name_or_path="t5-base",
... )
>>> # t5_model.shared is the word embeddings of the base model
>>> prompt_embedding = PromptEmbedding(config, t5_model.shared)
Input shape: (batch_size, total_virtual_tokens)
Output shape: (batch_size, total_virtual_tokens, token_dim)
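Continuing the example above, a hypothetical lookup that matches these shapes (indices are typically arange over the virtual tokens, tiled across the batch):
>>> import numpy as np
>>> import mindspore
>>> indices = mindspore.Tensor(np.tile(np.arange(20), (4, 1)), mindspore.int64)
>>> prompts = prompt_embedding(indices)
>>> prompts.shape
(4, 20, 768)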
Source code in mindnlp/peft/tuners/prompt_tuning/model.py, lines 22-124.
mindnlp.peft.tuners.prompt_tuning.model.PromptEmbedding.__init__(config, word_embeddings)
Initialize the PromptEmbedding class.
PARAMETER | DESCRIPTION |
---|---|
self | Reference to the current instance of the class. |
config | Configuration object; the relevant fields are listed below. TYPE: PromptTuningConfig |
word_embeddings | Word embeddings of the base model, used to initialize the prompt embedding layer. TYPE: nn.Embedding |
Relevant config fields:
- num_virtual_tokens (int): number of virtual tokens.
- num_transformer_submodules (int): number of transformer submodules.
- token_dim (int): dimensionality of the token embeddings.
- prompt_tuning_init (PromptTuningInit): type of prompt tuning initialization.
- inference_mode (bool): whether the model is in inference mode.
- tokenizer_kwargs (dict, optional): additional keyword arguments for the tokenizer.
- tokenizer_name_or_path (str): name or path of the pretrained tokenizer.
- prompt_tuning_init_text (str): text used for TEXT initialization.
RETURNS | DESCRIPTION |
---|---|
None | The method initializes the embedding layer in place with the provided word embeddings. |
RAISES | DESCRIPTION |
---|---|
ImportError | If the transformers module cannot be imported. |
ValueError | If the number of text tokens exceeds the total virtual tokens. |
TypeError | If the word embedding weights cannot be converted to float32. |
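For orientation, a hedged sketch of the TEXT initialization path following the upstream PEFT recipe: tokenize the init text, repeat or truncate the token ids to the virtual-token count, then take the matching word-embedding rows as the initial prompt table. All names here are illustrative, not the verbatim mindnlp source:
>>> import math
>>> import numpy as np
>>> import mindspore
>>> def text_init_weight(word_embeddings, tokenizer, init_text, total_virtual_tokens):
...     token_ids = tokenizer(init_text)["input_ids"]
...     # Repeat short texts, truncate long ones, to fit the virtual-token count.
...     if len(token_ids) < total_virtual_tokens:
...         token_ids = token_ids * math.ceil(total_virtual_tokens / len(token_ids))
...     token_ids = token_ids[:total_virtual_tokens]
...     ids = mindspore.Tensor(np.asarray(token_ids), mindspore.int64)
...     # Rows of the base model's word embeddings, cast to float32,
...     # become the initial prompt-embedding table.
...     return word_embeddings(ids).astype(mindspore.float32)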
Source code in mindnlp/peft/tuners/prompt_tuning/model.py, lines 59-106.
mindnlp.peft.tuners.prompt_tuning.model.PromptEmbedding.forward(indices)
Construct the prompt embeddings for the given indices.
PARAMETER | DESCRIPTION |
---|---|
self | An instance of the PromptEmbedding class. TYPE: PromptEmbedding |
indices | The indices of the virtual tokens to look up. TYPE: Tensor |
RETURNS | DESCRIPTION |
---|---|
Tensor | The prompt embeddings for the given indices, of shape (batch_size, total_virtual_tokens, token_dim). |
RAISES | DESCRIPTION |
---|---|
None | This method does not raise any exceptions. |
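In upstream PEFT the forward pass is a plain table lookup, and the sketch below assumes mindnlp does the same (the TEXT path only changes the table's initial values, not the lookup):
>>> def forward(self, indices):
...     # Select rows of the learned prompt table:
...     # (batch_size, total_virtual_tokens) -> (batch_size, total_virtual_tokens, token_dim).
...     return self.embedding(indices)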
Source code in mindnlp/peft/tuners/prompt_tuning/model.py, lines 108-124.