seq2seq
mindnlp.engine.train_args.seq2seq.Seq2SeqTrainingArguments
dataclass
Bases: TrainingArguments
| PARAMETER | DESCRIPTION |
| --- | --- |
| `sortish_sampler` | Whether to use a *sortish sampler* or not. Only possible if the underlying datasets are *Seq2SeqDataset* for now, but it will become generally available in the near future. It sorts the inputs according to lengths in order to minimize the padding size, with a bit of randomness for the training set. **TYPE:** `bool` |
| `predict_with_generate` | Whether to use `generate` to calculate generative metrics (ROUGE, BLEU). **TYPE:** `bool` |
| `generation_max_length` | The `max_length` to use on each evaluation loop when `predict_with_generate=True`. Will default to the `max_length` value of the model configuration. **TYPE:** `int` |
| `generation_num_beams` | The `num_beams` to use on each evaluation loop when `predict_with_generate=True`. Will default to the `num_beams` value of the model configuration. **TYPE:** `int` |
| `generation_config` | Allows to load a `GenerationConfig` from the `from_pretrained` method. This can be a string (the model id of a pretrained model configuration), a path to a directory containing a configuration file saved with `save_pretrained`, or a `GenerationConfig` object. **TYPE:** `str`, `Path`, or `GenerationConfig` |
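A minimal construction sketch, assuming the class is importable from the module path documented above; `output_dir` and its value are illustrative placeholders inherited from the base `TrainingArguments`, not fields documented in this table:

```python
from mindnlp.engine.train_args.seq2seq import Seq2SeqTrainingArguments

# Hypothetical values for illustration; output_dir and any other base
# fields come from the parent TrainingArguments class.
args = Seq2SeqTrainingArguments(
    output_dir="./seq2seq_out",
    predict_with_generate=True,   # compute ROUGE/BLEU via generate()
    generation_max_length=128,    # max_length for each evaluation loop
    generation_num_beams=4,       # num_beams for each evaluation loop
)
```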
Source code in mindnlp/engine/train_args/seq2seq.py
mindnlp.engine.train_args.seq2seq.Seq2SeqTrainingArguments.to_dict()
Serializes this instance while replacing `Enum` members with their values and `GenerationConfig` with dictionaries (for JSON serialization support). It obfuscates token values by removing them.
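A short usage sketch, continuing from the `args` instance above and assuming the serialized dictionary is fully JSON-native as the description implies:

```python
import json

# to_dict() replaces Enum members with their values and GenerationConfig
# with a plain dict, so the result can be dumped straight to JSON;
# token values are obfuscated in the output.
print(json.dumps(args.to_dict(), indent=2))
```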
Source code in mindnlp/engine/train_args/seq2seq.py