Commit 2478b101 authored by friebolin's avatar friebolin
Update

parent 17c9effa
@@ -190,7 +190,7 @@ For `<COMMAND>` you must enter one of the commands you find in the list below, w
|🔛 **`--tokenizer`**|Which tokenizer to use when preprocessing the datasets.|Choose `swp` for our tokenizer, `li` for the tokenizer of Li et al. [^6], or `salami` for the tokenizer used by another [student project](https://gitlab.cl.uni-heidelberg.de/salami-hd/salami/-/tree/master/).|
|**`-tc`**/**`--tcontext`**|Whether or not to preprocess the training set with context.||
|**`-vc`**/**`--vcontext`**|Whether or not to preprocess the test set with context.||
-|🔛 **`-max`**/**`--max_length`**|Defines the maximum sequence length when tokenizing the sentences.|⚠️ Always choose 256 for *TMix* and 512 for the other models.|
+|🔛 **`-max`**/**`--max_length`**|Defines the maximum sequence length when tokenizing the sentences.|Typically choose $256$ or $512$.|
|🔛 **`--train_loop`**|Defines which train loop to use.|Choose `swp` for our train loop implementation and `salami` for that of the [salami](https://gitlab.cl.uni-heidelberg.de/salami-hd/salami) student project.|
|🔛 **`-e`**/**`--epochs`**|Number of epochs for training.||
|🔛 **`-lr`**/**`--learning_rate`**|Learning rate for training.|`type=float`|
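
The flags in the table above could be wired up roughly as follows. This is a minimal sketch assuming Python's `argparse`; only the flag names, choices, and types come from the table, while the parser object, destination names, and the demo invocation are illustrative assumptions, not the project's actual code.

```python
import argparse

# Hypothetical parser mirroring the documented flags; not the project's real CLI.
parser = argparse.ArgumentParser(description="Preprocessing and training options")
parser.add_argument("--tokenizer", choices=["swp", "li", "salami"],
                    help="Which tokenizer to use when preprocessing the datasets.")
parser.add_argument("-tc", "--tcontext", action="store_true",
                    help="Preprocess the training set with context.")
parser.add_argument("-vc", "--vcontext", action="store_true",
                    help="Preprocess the test set with context.")
parser.add_argument("-max", "--max_length", type=int,
                    help="Maximum sequence length when tokenizing the sentences.")
parser.add_argument("--train_loop", choices=["swp", "salami"],
                    help="Which train loop implementation to use.")
parser.add_argument("-e", "--epochs", type=int,
                    help="Number of epochs for training.")
parser.add_argument("-lr", "--learning_rate", type=float,
                    help="Learning rate for training.")

# Example invocation with assumed values.
args = parser.parse_args(["--tokenizer", "swp", "-max", "512", "-e", "3", "-lr", "2e-5"])
```

Boolean switches like `-tc`/`-vc` take no value (`action="store_true"`), matching the "whether or not" wording in the table, while `--learning_rate` is parsed as a float as noted in its `type=float` hint.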