.\" DO NOT MODIFY THIS FILE! It was generated by help2man 1.45.1. .TH VW "1" "May 2014" "vw 7.3.0" "User Commands" .SH NAME vw \- Vowpal Wabbit -- fast online learning tool .SH DESCRIPTION .SS "VW options:" .TP \fB\-h\fR [ \fB\-\-help\fR ] Look here: http://hunch.net/~vw/ and click on Tutorial. .TP \fB\-\-active_learning\fR active learning mode .TP \fB\-\-active_simulation\fR active learning simulation mode .TP \fB\-\-active_mellowness\fR arg active learning mellowness parameter c_0. Default 8 .TP \fB\-\-binary\fR report loss as binary classification on \fB\-1\fR,1 .TP \fB\-\-autolink\fR arg create link function with polynomial d .TP \fB\-\-sgd\fR use regular stochastic gradient descent update. .TP \fB\-\-adaptive\fR use adaptive, individual learning rates. .TP \fB\-\-invariant\fR use safe/importance aware updates. .TP \fB\-\-normalized\fR use per feature normalized updates .TP \fB\-\-exact_adaptive_norm\fR use current default invariant normalized adaptive update rule .TP \fB\-a\fR [ \fB\-\-audit\fR ] print weights of features .TP \fB\-b\fR [ \fB\-\-bit_precision\fR ] arg number of bits in the feature table .TP \fB\-\-bfgs\fR use bfgs optimization .TP \fB\-c\fR [ \fB\-\-cache\fR ] Use a cache. The default is .cache .TP \fB\-\-cache_file\fR arg The location(s) of cache_file. .TP \fB\-\-compressed\fR use gzip format whenever possible. If a cache file is being created, this option creates a compressed cache file. A mixture of raw\-text & compressed inputs are supported with autodetection. .TP \fB\-\-no_stdin\fR do not default to reading from stdin .TP \fB\-\-conjugate_gradient\fR use conjugate gradient based optimization .TP \fB\-\-csoaa\fR arg Use one\-against\-all multiclass learning with costs .TP \fB\-\-wap\fR arg Use weighted all\-pairs multiclass learning with costs .TP \fB\-\-csoaa_ldf\fR arg Use one\-against\-all multiclass learning with label dependent features. Specify singleline or multiline. .TP \fB\-\-wap_ldf\fR arg Use weighted all\-pairs multiclass learning with label dependent features. .IP Specify singleline or multiline. .TP \fB\-\-cb\fR arg Use contextual bandit learning with costs .TP \fB\-\-l1\fR arg l_1 lambda .TP \fB\-\-l2\fR arg l_2 lambda .TP \fB\-d\fR [ \fB\-\-data\fR ] arg Example Set .TP \fB\-\-daemon\fR persistent daemon mode on port 26542 .TP \fB\-\-num_children\fR arg number of children for persistent daemon mode .TP \fB\-\-pid_file\fR arg Write pid file in persistent daemon mode .TP \fB\-\-decay_learning_rate\fR arg Set Decay factor for learning_rate between passes .TP \fB\-\-input_feature_regularizer\fR arg Per feature regularization input file .TP \fB\-f\fR [ \fB\-\-final_regressor\fR ] arg Final regressor .TP \fB\-\-readable_model\fR arg Output human\-readable final regressor .TP \fB\-\-hash\fR arg how to hash the features. Available options: strings, all .TP \fB\-\-hessian_on\fR use second derivative in line search .TP \fB\-\-version\fR Version information .TP \fB\-\-ignore\fR arg ignore namespaces beginning with character .TP \fB\-\-keep\fR arg keep namespaces beginning with character .TP \fB\-k\fR [ \fB\-\-kill_cache\fR ] do not reuse existing cache: create a new one always .TP \fB\-\-initial_weight\fR arg Set all weights to an initial value of 1. 
.TP
\fB\-i\fR [ \fB\-\-initial_regressor\fR ] arg
Initial regressor(s)
.TP
\fB\-\-initial_pass_length\fR arg
initial number of examples per pass
.TP
\fB\-\-initial_t\fR arg
initial t value
.TP
\fB\-\-lda\fR arg
Run lda with <int> topics
.TP
\fB\-\-span_server\fR arg
Location of server for setting up spanning tree
.TP
\fB\-\-min_prediction\fR arg
Smallest prediction to output
.TP
\fB\-\-max_prediction\fR arg
Largest prediction to output
.TP
\fB\-\-mem\fR arg
memory in bfgs
.TP
\fB\-\-nn\fR arg
Use sigmoidal feedforward network with <k> hidden units
.TP
\fB\-\-noconstant\fR
Don't add a constant feature
.TP
\fB\-\-noop\fR
do no learning
.TP
\fB\-\-oaa\fR arg
Use one\-against\-all multiclass learning with <k> labels
.TP
\fB\-\-ect\fR arg
Use error correcting tournament with <k> labels
.TP
\fB\-\-output_feature_regularizer_binary\fR arg
Per feature regularization output file
.TP
\fB\-\-output_feature_regularizer_text\fR arg
Per feature regularization output file, in text
.TP
\fB\-\-port\fR arg
port to listen on
.TP
\fB\-\-power_t\fR arg
t power value
.TP
\fB\-l\fR [ \fB\-\-learning_rate\fR ] arg
Set learning rate
.TP
\fB\-\-passes\fR arg
Number of training passes
.TP
\fB\-\-termination\fR arg
Termination threshold
.TP
\fB\-p\fR [ \fB\-\-predictions\fR ] arg
File to output predictions to
.TP
\fB\-q\fR [ \fB\-\-quadratic\fR ] arg
Create and use quadratic features
.TP
\fB\-\-cubic\fR arg
Create and use cubic features
.TP
\fB\-\-quiet\fR
Don't output diagnostics
.TP
\fB\-\-rank\fR arg
rank for matrix factorization.
.TP
\fB\-\-random_weights\fR arg
make initial weights random
.TP
\fB\-\-random_seed\fR arg
seed random number generator
.TP
\fB\-r\fR [ \fB\-\-raw_predictions\fR ] arg
File to output unnormalized predictions to
.TP
\fB\-\-ring_size\fR arg
size of example ring
.TP
\fB\-\-examples\fR arg
number of examples to parse
.TP
\fB\-\-save_per_pass\fR
Save the model after every pass over data
.TP
\fB\-\-save_resume\fR
save extra state so learning can be resumed later with new data
.TP
\fB\-\-sendto\fR arg
send examples to <host>
.TP
\fB\-\-searn\fR arg
use searn, argument=maximum action id
.TP
\fB\-\-searnimp\fR arg
use searn, argument=maximum action id or 0 for LDF
.TP
\fB\-t\fR [ \fB\-\-testonly\fR ]
Ignore label information and just test
.TP
\fB\-\-loss_function\fR arg (=squared)
Specify the loss function to be used; squared is the default. Currently available: squared, classic, hinge, logistic, and quantile.
.TP
\fB\-\-quantile_tau\fR arg (=0.5)
Parameter \etau associated with quantile loss. Defaults to 0.5
.TP
\fB\-\-unique_id\fR arg
unique id used for cluster parallel jobs
.TP
\fB\-\-total\fR arg
total number of nodes used in cluster parallel job
.TP
\fB\-\-node\fR arg
node number in cluster parallel job
.TP
\fB\-\-sort_features\fR
turn this on to disregard the order in which features have been defined. This will lead to smaller cache sizes
.TP
\fB\-\-ngram\fR arg
Generate N grams
.TP
\fB\-\-skips\fR arg
Generate skips in N grams. This, in conjunction with the \fB\-\-ngram\fR option, can be used to generate generalized n\-skip\-k\-grams.
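.SH EXAMPLES
The invocations below are illustrative sketches combining only options
documented above; file names such as \fItrain.dat\fR, \fItest.dat\fR,
\fImodel.vw\fR, and \fIpredictions.txt\fR are placeholders, not files
shipped with vw.
.PP
Train for 10 passes over a data set, building a compressed cache
(multiple passes require a cache) and saving the final regressor:
.IP
.nf
vw \-d train.dat \-c \-\-compressed \-\-passes 10 \-f model.vw
.fi
.PP
Test a previously saved model, ignoring label information, and write
predictions to a file:
.IP
.nf
vw \-t \-i model.vw \-d test.dat \-p predictions.txt
.fi
.PP
Train a binary classifier with logistic loss, reporting loss as
\-1,1 classification error:
.IP
.nf
vw \-d train.dat \-\-loss_function logistic \-\-binary \-f model.vw
.fi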