.\" Text automatically generated by txt2man
.TH sparse_coding "1" "" ""
.SH NAME
\fBsparse_coding\fP \- sparse coding
.SH SYNOPSIS
.nf
.fam C
\fBsparse_coding\fP [\fB-h\fP] [\fB-v\fP] \fB-k\fP \fIint\fP \fB-i\fP \fIstring\fP [\fB-c\fP \fIstring\fP] [\fB-d\fP \fIstring\fP] [\fB-D\fP \fIstring\fP] [\fB-l\fP \fIdouble\fP] [\fB-L\fP \fIdouble\fP] [\fB-n\fP \fIint\fP] [\fB-w\fP \fIdouble\fP] [\fB-N\fP] [\fB-o\fP \fIdouble\fP] [\fB-s\fP \fIint\fP] [\fB-V\fP]
.fam T
.fi
.SH DESCRIPTION
An implementation of Sparse Coding with Dictionary Learning, which achieves sparsity via an l1-norm regularizer on the codes (LASSO) or an (l1+l2)-norm regularizer on the codes (the Elastic Net). Given a dense data matrix X with n points in d dimensions, sparse coding seeks to find a dense dictionary matrix D with k atoms in d dimensions, and a sparse coding matrix Z with n points in k dimensions.
.PP
The original data matrix X can then be reconstructed as D * Z. Therefore, this program finds a representation of each point in X as a sparse linear combination of atoms in the dictionary D.
.PP
The sparse coding is found with an algorithm that alternates between a dictionary step, which updates the dictionary D, and a sparse coding step, which updates the sparse coding matrix Z.
.PP
To run this program, the input matrix X must be specified (with \fB-i\fP), along with the number of atoms in the dictionary (\fB-k\fP). An initial dictionary may also be specified with the \fB--initial_dictionary\fP option. The l1-norm and l2-norm regularization parameters may be specified with \fB-l\fP and \fB-L\fP, respectively. For example, to run sparse coding on the dataset in data.csv using 200 atoms and an l1-regularization parameter of 0.1, saving the dictionary to dict.csv and the codes to codes.csv, use
.PP
$ \fBsparse_coding\fP \fB-i\fP data.csv \fB-k\fP 200 \fB-l\fP 0.1 \fB-d\fP dict.csv \fB-c\fP codes.csv
.PP
The maximum number of iterations may be specified with the \fB-n\fP option.
Optionally, the input data matrix X can be normalized before coding with the \fB-N\fP option.
.SH REQUIRED OPTIONS
.TP
.B \fB--atoms\fP (\fB-k\fP) [\fIint\fP]
Number of atoms in the dictionary.
.TP
.B \fB--input_file\fP (\fB-i\fP) [\fIstring\fP]
Filename of the input data.
.SH OPTIONS
.TP
.B \fB--codes_file\fP (\fB-c\fP) [\fIstring\fP]
Filename to save the output sparse codes to. Default value 'codes.csv'.
.TP
.B \fB--dictionary_file\fP (\fB-d\fP) [\fIstring\fP]
Filename to save the output dictionary to. Default value 'dictionary.csv'.
.TP
.B \fB--help\fP (\fB-h\fP)
Default help info.
.TP
.B \fB--info\fP [\fIstring\fP]
Get help on a specific module or option. Default value ''.
.TP
.B \fB--initial_dictionary\fP (\fB-D\fP) [\fIstring\fP]
Filename for an optional initial dictionary. Default value ''.
.TP
.B \fB--lambda1\fP (\fB-l\fP) [\fIdouble\fP]
Sparse coding l1-norm regularization parameter. Default value 0.
.TP
.B \fB--lambda2\fP (\fB-L\fP) [\fIdouble\fP]
Sparse coding l2-norm regularization parameter. Default value 0.
.TP
.B \fB--max_iterations\fP (\fB-n\fP) [\fIint\fP]
Maximum number of iterations for sparse coding (0 indicates no limit). Default value 0.
.TP
.B \fB--newton_tolerance\fP (\fB-w\fP) [\fIdouble\fP]
Tolerance for convergence of the Newton method. Default value 1e-06.
.TP
.B \fB--normalize\fP (\fB-N\fP)
If set, the input data matrix will be normalized before coding.
.TP
.B \fB--objective_tolerance\fP (\fB-o\fP) [\fIdouble\fP]
Tolerance for convergence of the objective function. Default value 0.01.
.TP
.B \fB--seed\fP (\fB-s\fP) [\fIint\fP]
Random seed. If 0, 'std::time(NULL)' is used. Default value 0.
.TP
.B \fB--verbose\fP (\fB-v\fP)
Display informational messages and the full list of parameters and timers at the end of execution.
.TP
.B \fB--version\fP (\fB-V\fP)
Display the version of mlpack.
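.PP
As a further illustration, the documented options above can be combined; the following is a hypothetical Elastic Net run (both \fB-l\fP and \fB-L\fP nonzero) with input normalization, an iteration cap, and a fixed seed. The filenames and parameter values here are illustrative only, not recommendations:

```shell
# Hypothetical invocation (filenames and values are placeholders):
# elastic-net sparse coding with l1 penalty 0.1 and l2 penalty 0.01,
# normalizing the input matrix first (-N), capping the alternating
# optimization at 100 iterations (-n), and fixing the random seed (-s)
# so repeated runs start from the same initial dictionary.
sparse_coding -i data.csv -k 200 -l 0.1 -L 0.01 -N -n 100 -s 1 \
    -d dict.csv -c codes.csv
```

The resulting dict.csv and codes.csv can then be multiplied (D * Z) to obtain the reconstruction of the normalized input, as described in the DESCRIPTION section.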
.SH ADDITIONAL INFORMATION
For further information, including relevant papers, citations, and theory, consult the documentation found at http://www.mlpack.org or included with your distribution of MLPACK.