
SAHARACLIENT(1) Sahara Client SAHARACLIENT(1)

NAME

saharaclient - Sahara Client
This is a client for the OpenStack Sahara API. There's a Python API (the saharaclient module) and a command-line utility (installed as an OpenStackClient plugin). Each implements the entire OpenStack Sahara API.
To use the sahara client, you'll need credentials for an OpenStack cloud that implements the Data Processing API.
You may want to read the OpenStack Sahara Docs -- the overview, at least -- to get an idea of the concepts; understanding them will make this library easier to follow.


Contents:

SAHARA CLIENT

Overview

Sahara Client provides a set of Python interfaces for communicating with the Sahara REST API. It enables users to perform most of the existing operations, such as retrieving template lists, creating Clusters, and submitting EDP Jobs.

Instantiating a Client

To start using the Sahara Client, users have to create an instance of the Client class. The client constructor accepts a set of parameters used to authenticate and to locate the Sahara endpoint.
class saharaclient.api.client.Client(username=None, api_key=None, project_id=None, project_name=None, auth_url=None, sahara_url=None, endpoint_type='publicURL', service_type='data-processing', input_auth_token=None, session=None, auth=None, insecure=False, cacert=None, region_name=None, **kwargs)
Client for the OpenStack Data Processing v1 API.
Parameters
username (str) -- Username for Keystone authentication.
api_key (str) -- Password for Keystone authentication.
project_id (str) -- Keystone Tenant id.
project_name (str) -- Keystone Tenant name.
auth_url (str) -- Keystone URL that will be used for authentication.
sahara_url (str) -- Sahara REST API URL to communicate with.
endpoint_type (str) -- Desired Sahara endpoint type.
service_type (str) -- Sahara service name in Keystone catalog.
input_auth_token (str) -- Keystone authorization token.
session -- Keystone Session object.
auth -- Keystone Authentication Plugin object.
insecure (boolean) -- Allow insecure (unverified) SSL connections.
cacert (string) -- Path to the Privacy Enhanced Mail (PEM) file which contains certificates needed to establish SSL connection with the identity service.
region_name (string) -- Name of a region to select when choosing an endpoint from the service catalog.



Important!
It is not mandatory to provide all of the parameters above; the minimum set is whatever is sufficient to determine the Sahara endpoint, authenticate the user, and select the tenant to operate in.

Authentication check

Passing authentication parameters directly to the Sahara Client is deprecated; a Keystone Session object should be used instead. For example:
from keystoneauth1.identity import v2
from keystoneauth1 import session
from saharaclient import client
auth = v2.Password(auth_url=AUTH_URL, username=USERNAME, password=PASSWORD, tenant_name=PROJECT_ID)
ses = session.Session(auth=auth)
sahara = client.Client('1.1', session=ses)


For more information about Keystone Sessions, see Using Sessions.

Sahara endpoint discovery

If the user has a direct URL pointing to the Sahara REST API, it may be specified as sahara_url. If this parameter is missing, the Sahara client will use the Keystone Service Catalog to find the endpoint. Two parameters, service_type and endpoint_type, configure the endpoint search; both have default values.
from keystoneauth1.identity import v2
from keystoneauth1 import session
from saharaclient import client
auth = v2.Password(auth_url=AUTH_URL, username=USERNAME, password=PASSWORD, tenant_name=PROJECT_ID)
ses = session.Session(auth=auth)
sahara = client.Client('1.1', session=ses, service_type="non-default-service-type", endpoint_type="internalURL")


Object managers

Sahara Client exposes the following fields to operate with:
plugins
clusters
cluster_templates
node_group_templates
images
data_sources
job_binaries
job_binary_internals
job_executions
job_types



Each of these fields is a reference to a Manager for the corresponding group of REST calls.
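For example, the following minimal sketch builds a session-based client as shown above and calls a few of these managers. The credential variables are placeholders, and the example snippets in the Supported operations section below reuse this sahara instance:
from keystoneauth1.identity import v2
from keystoneauth1 import session
from saharaclient import client

# AUTH_URL, USERNAME, PASSWORD and PROJECT_ID stand in for real credentials.
auth = v2.Password(auth_url=AUTH_URL, username=USERNAME,
                   password=PASSWORD, tenant_name=PROJECT_ID)
ses = session.Session(auth=auth)
sahara = client.Client('1.1', session=ses)

# Each manager field maps to a group of REST calls.
plugins = sahara.plugins.list()
clusters = sahara.clusters.list()
templates = sahara.cluster_templates.list()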

Supported operations

Plugin ops

class saharaclient.api.plugins.PluginManager(api)
convert_to_cluster_template(plugin_name, hadoop_version, template_name, filecontent)
Convert to a Cluster Template.
The Cluster Template is created directly, bypassing the regular Cluster Template creation mechanism.

get(plugin_name)
Get information about a Plugin.

get_version_details(plugin_name, hadoop_version)
Get version details
Get the list of Services and Service Parameters for a specified Plugin and Plugin Version.

list(search_opts=None)
Get a list of Plugins.
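
For example, a sketch reusing the sahara client instance from above; the plugin name and version are illustrative values only:
plugins = sahara.plugins.list()
vanilla = sahara.plugins.get('vanilla')
details = sahara.plugins.get_version_details('vanilla', '2.7.1')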


Image Registry ops

class saharaclient.api.images.ImageManager(api)
get(id)
Get information about an image

list(search_opts=None)
Get a list of registered images.

unregister_image(image_id)
Remove an Image from Sahara Image Registry.

update_image(image_id, user_name, desc=None)
Create or update an Image in Image Registry.

update_tags(image_id, new_tags)
Update an Image's tags.
Parameters
new_tags (list) -- list of tags that will replace currently assigned tags
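
For example, a sketch reusing the sahara client instance from above; IMAGE_ID, the username and the tags are placeholders:
sahara.images.update_image(IMAGE_ID, user_name='ubuntu',
                           desc='Base image for data processing clusters')
sahara.images.update_tags(IMAGE_ID, ['vanilla', '2.7.1'])
image = sahara.images.get(IMAGE_ID)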



Node Group Template ops

class saharaclient.api.node_group_templates.NodeGroupTemplateManager(api)
create(name, plugin_name, hadoop_version, flavor_id, description=None, volumes_per_node=None, volumes_size=None, node_processes=None, node_configs=None, floating_ip_pool=None, security_groups=None, auto_security_group=None, availability_zone=None, volumes_availability_zone=None, volume_type=None, image_id=None, is_proxy_gateway=None, volume_local_to_instance=None, use_autoconfig=None, shares=None, is_public=None, is_protected=None, volume_mount_prefix=None)
Create a Node Group Template.

delete(ng_template_id)
Delete a Node Group Template.

get(ng_template_id)
Get information about a Node Group Template.

list(search_opts=None)
Get a list of Node Group Templates.

update(ng_template_id, name=NotUpdated, plugin_name=NotUpdated, hadoop_version=NotUpdated, flavor_id=NotUpdated, description=NotUpdated, volumes_per_node=NotUpdated, volumes_size=NotUpdated, node_processes=NotUpdated, node_configs=NotUpdated, floating_ip_pool=NotUpdated, security_groups=NotUpdated, auto_security_group=NotUpdated, availability_zone=NotUpdated, volumes_availability_zone=NotUpdated, volume_type=NotUpdated, image_id=NotUpdated, is_proxy_gateway=NotUpdated, volume_local_to_instance=NotUpdated, use_autoconfig=NotUpdated, shares=NotUpdated, is_public=NotUpdated, is_protected=NotUpdated, volume_mount_prefix=NotUpdated)
Update a Node Group Template.
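
For example, a sketch reusing the sahara client instance from above; the plugin, version, flavor and process names are illustrative, and the returned object is assumed to expose an id attribute:
ngt = sahara.node_group_templates.create(
    name='worker-ngt',
    plugin_name='vanilla',
    hadoop_version='2.7.1',
    flavor_id='2',
    node_processes=['datanode', 'nodemanager'],
    volumes_per_node=2,
    volumes_size=10)
sahara.node_group_templates.update(ngt.id, description='Worker nodes with volumes')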


Cluster Template ops

class saharaclient.api.cluster_templates.ClusterTemplateManager(api)
create(name, plugin_name, hadoop_version, description=None, cluster_configs=None, node_groups=None, anti_affinity=None, net_id=None, default_image_id=None, use_autoconfig=None, shares=None, is_public=None, is_protected=None)
Create a Cluster Template.

delete(cluster_template_id)
Delete a Cluster Template.

get(cluster_template_id)
Get information about a Cluster Template.

list(search_opts=None)
Get list of Cluster Templates.

update(cluster_template_id, name=NotUpdated, plugin_name=NotUpdated, hadoop_version=NotUpdated, description=NotUpdated, cluster_configs=NotUpdated, node_groups=NotUpdated, anti_affinity=NotUpdated, net_id=NotUpdated, default_image_id=NotUpdated, use_autoconfig=NotUpdated, shares=NotUpdated, is_public=NotUpdated, is_protected=NotUpdated)
Update a Cluster Template.
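
For example, a sketch reusing the sahara client and the ngt object from the previous example; the node group dict keys mirror those used in the scale_object example below, and the count is illustrative:
ct = sahara.cluster_templates.create(
    name='vanilla-cluster-template',
    plugin_name='vanilla',
    hadoop_version='2.7.1',
    description='Three-worker vanilla cluster',
    node_groups=[{'name': 'workers',
                  'count': 3,
                  'node_group_template_id': ngt.id}])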


Cluster ops

class saharaclient.api.clusters.ClusterManager(api)
create(name, plugin_name, hadoop_version, cluster_template_id=None, default_image_id=None, is_transient=None, description=None, cluster_configs=None, node_groups=None, user_keypair_id=None, anti_affinity=None, net_id=None, count=None, use_autoconfig=None, shares=None, is_public=None, is_protected=None)
Launch a Cluster.

delete(cluster_id)
Delete a Cluster.

get(cluster_id, show_progress=False)
Get information about a Cluster.

list(search_opts=None)
Get a list of Clusters.

scale(cluster_id, scale_object)
Scale an existing Cluster.
Parameters
scale_object -- dict that describes scaling operation
Example

The following scale_object can be used to change the number of instances in an existing node group and to add instances of a new node group to an existing cluster:
{
    "add_node_groups": [
        {
            "count": 3,
            "name": "new_ng",
            "node_group_template_id": "ngt_id"
        }
    ],
    "resize_node_groups": [
        {
            "count": 2,
            "name": "old_ng"
        }
    ]
}
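
Assuming a sahara client instance as in the earlier examples, the dict above can be passed directly to the manager (cluster_id is a placeholder):
sahara.clusters.scale(cluster_id, scale_object)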



update(cluster_id, name=NotUpdated, description=NotUpdated, is_public=NotUpdated, is_protected=NotUpdated, shares=NotUpdated)
Update a Cluster.

verification_update(cluster_id, status)
Start a verification for a Cluster.
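
For example, launching a Cluster from the template created earlier; this is a sketch in which the image, keypair and network identifiers are placeholders:
cluster = sahara.clusters.create(
    name='my-cluster',
    plugin_name='vanilla',
    hadoop_version='2.7.1',
    cluster_template_id=ct.id,
    default_image_id=IMAGE_ID,
    user_keypair_id='my-keypair',
    net_id=NETWORK_ID)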


Data Source ops

class saharaclient.api.data_sources.DataSourceManager(api)
create(name, description, data_source_type, url, credential_user=None, credential_pass=None, is_public=None, is_protected=None)
Create a Data Source.

delete(data_source_id)
Delete a Data Source.

get(data_source_id)
Get information about a Data Source.

list(search_opts=None)
Get a list of Data Sources.

update(data_source_id, update_data)
Update a Data Source.
Parameters
update_data (dict) -- dict that contains fields that should be updated with new values.

Fields that can be updated:
name
description
type
url
is_public
is_protected
credentials - dict with user and password keyword arguments
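
For example, a sketch reusing the sahara client instance from above; the Swift URL and credentials are illustrative:
ds = sahara.data_sources.create(
    name='job-input',
    description='Input data in Swift',
    data_source_type='swift',
    url='swift://container/input',
    credential_user='swift_user',
    credential_pass='swift_pass')
sahara.data_sources.update(ds.id, {'name': 'renamed-input', 'is_public': True})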



Job Binary Internal ops

class saharaclient.api.job_binary_internals.JobBinaryInternalsManager(api)
create(name, data)
Create a Job Binary Internal.
Parameters
data (str) -- raw data or script text


update(job_binary_id, name=NotUpdated, is_public=NotUpdated, is_protected=NotUpdated)
Update a Job Binary Internal.
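
For example, storing a local script in the internal database (a sketch; the file name is a placeholder):
# Read a script from disk and upload it as a Job Binary Internal.
with open('wordcount.pig') as script:
    jbi = sahara.job_binary_internals.create('wordcount.pig', script.read())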


Job Binary ops

class saharaclient.api.job_binaries.JobBinariesManager(api)
create(name, url, description=None, extra=None, is_public=None, is_protected=None)
Create a Job Binary.

delete(job_binary_id)
Delete a Job Binary.

get(job_binary_id)
Get information about a Job Binary.

get_file(job_binary_id)
Download a Job Binary.

list(search_opts=None)
Get a list of Job Binaries.

update(job_binary_id, data)
Update a Job Binary.
Parameters
data (dict) -- dict that contains fields that should be updated with new values.

Fields that can be updated:
name
description
url
is_public
is_protected
extra - dict with user and password keyword arguments
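
For example, a sketch reusing the sahara client instance from above; the Swift URL and the credentials passed in extra are illustrative:
jb = sahara.job_binaries.create(
    name='wordcount.pig',
    url='swift://container/wordcount.pig',
    description='Pig script stored in Swift',
    extra={'user': 'swift_user', 'password': 'swift_pass'})
script_data = sahara.job_binaries.get_file(jb.id)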



Job ops

class saharaclient.api.jobs.JobsManager(api)
create(name, type, mains=None, libs=None, description=None, interface=None, is_public=None, is_protected=None)
Create a Job.

delete(job_id)
Delete a Job

get(job_id)
Get information about a Job

get_configs(job_type)
Get config hints for a specified Job type.

list(search_opts=None)
Get a list of Jobs.

update(job_id, name=NotUpdated, description=NotUpdated, is_public=NotUpdated, is_protected=NotUpdated)
Update a Job.
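
For example, creating a Pig job from the job binary above (a sketch; mains is assumed to take a list of job binary IDs):
job = sahara.jobs.create(
    name='wordcount',
    type='Pig',
    mains=[jb.id],
    description='Word count example')
config_hints = sahara.jobs.get_configs('Pig')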


Job Execution ops

class saharaclient.api.job_executions.JobExecutionsManager(api)
create(job_id, cluster_id, input_id=None, output_id=None, configs=None, interface=None, is_public=None, is_protected=None)
Launch a Job.

delete(obj_id)
Delete a Job Execution.

get(obj_id)
Get information about a Job Execution.

list(search_opts=None)
Get a list of Job Executions.

update(obj_id, is_public=NotUpdated, is_protected=NotUpdated)
Update a Job Execution.
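
For example, launching the job above on an existing cluster (a sketch; the cluster and data source objects are assumed to have been created as in the earlier examples):
je = sahara.job_executions.create(
    job_id=job.id,
    cluster_id=cluster.id,
    input_id=ds.id,          # input data source created earlier
    output_id=output_ds.id)  # output_ds: a second data source created the same way
status = sahara.job_executions.get(je.id)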


Job Types ops

class saharaclient.api.job_types.JobTypesManager(api)
list(search_opts=None)
Get a list of job types supported by plugins.


SAHARA CLI

The Sahara shell utility is now part of the OpenStackClient, so all shell commands take the following form:
$ openstack dataprocessing <command> [arguments...]


To get a list of all possible commands, run:
$ openstack help dataprocessing


To get detailed help for a command, run:
$ openstack help dataprocessing <command>


For more information about commands and their parameters, refer to the Sahara CLI commands section.
For more information about the abilities and features of the OpenStackClient CLI, refer to the OpenStackClient documentation.

Configuration

The CLI is configured via environment variables and command-line options, which are described in http://docs.openstack.org/developer/python-openstackclient/authentication.html.
Authentication using username/password is most commonly used and can be provided with environment variables:
export OS_AUTH_URL=<url-to-openstack-identity>
export OS_PROJECT_NAME=<project-name>
export OS_USERNAME=<username>
export OS_PASSWORD=<password>  # (optional)


or command-line options:
--os-auth-url <url>
--os-project-name <project-name>
--os-username <username>
[--os-password <password>]


Additionally, the Sahara API URL can be configured with the parameter:
--os-data-processing-url


or with the environment variable:
export OS_DATA_PROCESSING_URL=<url-to-sahara-API>


SAHARA CLI COMMANDS

The following commands are currently supported by the Sahara CLI:

Plugins

dataprocessing plugin configs get
Get plugin configs
usage: dataprocessing plugin configs get [-h] [--file <file>]
                                         <plugin> <version>


Positional arguments:
Name of the plugin to provide config information about
Version of the plugin to provide config information about

Options:
--file
Destination file (defaults to plugin name)


dataprocessing plugin list
Lists plugins
usage: dataprocessing plugin list [-h] [-f {csv,json,table,value,yaml}]
                                  [-c COLUMN] [--max-width <integer>]
                                  [--noindent]
                                  [--quote {all,minimal,none,nonnumeric}]
                                  [--long]


Options:
-f=table, --format=table
the output format, defaults to table
Possible choices: csv, json, table, value, yaml
-c=[], --column=[]
specify the column(s) to include, can be repeated
--max-width=0
Maximum display width, 0 to disable
--noindent=False
whether to disable indenting the JSON
--quote=nonnumeric
when to include quotes, defaults to nonnumeric
Possible choices: all, minimal, none, nonnumeric
--long=False
List additional fields in output


dataprocessing plugin show
Display plugin details
usage: dataprocessing plugin show [-h] [-f {json,shell,table,value,yaml}]
                                  [-c COLUMN] [--max-width <integer>]
                                  [--noindent] [--prefix PREFIX]
                                  [--version VERSION]
                                  <plugin>


Positional arguments:
Name of the plugin to display

Options:
-f=table, --format=table
the output format, defaults to table
Possible choices: json, shell, table, value, yaml
-c=[], --column=[]
specify the column(s) to include, can be repeated
--max-width=0
Maximum display width, 0 to disable
--noindent=False
whether to disable indenting the JSON
--prefix=
add a prefix to all variable names
--version
Version of the plugin to display


Images

dataprocessing image tags add
Add image tags
usage: dataprocessing image tags add [-h] [-f {json,shell,table,value,yaml}]
                                     [-c COLUMN] [--max-width <integer>]
                                     [--noindent] [--prefix PREFIX] --tags
                                     <tag> [<tag> ...]
                                     <image>


Positional arguments:
Name or id of the image

Options:
-f=table, --format=table
the output format, defaults to table
Possible choices: json, shell, table, value, yaml
-c=[], --column=[]
specify the column(s) to include, can be repeated
--max-width=0
Maximum display width, 0 to disable
--noindent=False
whether to disable indenting the JSON
--prefix=
add a prefix to all variable names
--tags
Tag(s) to add [REQUIRED]


dataprocessing image list
Lists registered images
usage: dataprocessing image list [-h] [-f {csv,json,table,value,yaml}]
                                 [-c COLUMN] [--max-width <integer>]
                                 [--noindent]
                                 [--quote {all,minimal,none,nonnumeric}]
                                 [--long] [--name <name-regex>]
                                 [--tags <tag> [<tag> ...]]
                                 [--username <username>]


Options:
-f=table, --format=table
the output format, defaults to table
Possible choices: csv, json, table, value, yaml
-c=[], --column=[]
specify the column(s) to include, can be repeated
--max-width=0
Maximum display width, 0 to disable
--noindent=False
whether to disable indenting the JSON
--quote=nonnumeric
when to include quotes, defaults to nonnumeric
Possible choices: all, minimal, none, nonnumeric
--long=False
List additional fields in output
--name
Regular expression to match image name
--tags
List images with specific tag(s)
--username
List images with specific username


dataprocessing image register
Register an image
usage: dataprocessing image register [-h] [-f {json,shell,table,value,yaml}]
                                     [-c COLUMN] [--max-width <integer>]
                                     [--noindent] [--prefix PREFIX] --username
                                     <username> [--description <description>]
                                     <image>


Positional arguments:
Name or ID of the image to register

Options:
-f=table, --format=table
the output format, defaults to table
Possible choices: json, shell, table, value, yaml
-c=[], --column=[]
specify the column(s) to include, can be repeated
--max-width=0
Maximum display width, 0 to disable
--noindent=False
whether to disable indenting the JSON
--prefix=
add a prefix to all variable names
--username
Username of privileged user in the image [REQUIRED]
--description
Description of the image. If not provided, description of the image will be reset to empty


dataprocessing image tags remove
Remove image tags
usage: dataprocessing image tags remove [-h]
                                        [-f {json,shell,table,value,yaml}]
                                        [-c COLUMN] [--max-width <integer>]
                                        [--noindent] [--prefix PREFIX]
                                        [--tags <tag> [<tag> ...] | --all]
                                        <image>


Positional arguments:
Name or id of the image

Options:
-f=table, --format=table
the output format, defaults to table
Possible choices: json, shell, table, value, yaml
-c=[], --column=[]
specify the column(s) to include, can be repeated
--max-width=0
Maximum display width, 0 to disable
--noindent=False
whether to disable indenting the JSON
--prefix=
add a prefix to all variable names
--tags
Tag(s) to remove
--all=False
Remove all tags from image


dataprocessing image tags set
Set image tags (Replace current image tags with provided ones)
usage: dataprocessing image tags set [-h] [-f {json,shell,table,value,yaml}]
                                     [-c COLUMN] [--max-width <integer>]
                                     [--noindent] [--prefix PREFIX] --tags
                                     <tag> [<tag> ...]
                                     <image>


Positional arguments:
Name or id of the image

Options:
-f=table, --format=table
the output format, defaults to table
Possible choices: json, shell, table, value, yaml
-c=[], --column=[]
specify the column(s) to include, can be repeated
--max-width=0
Maximum display width, 0 to disable
--noindent=False
whether to disable indenting the JSON
--prefix=
add a prefix to all variable names
--tags
Tag(s) to set [REQUIRED]


dataprocessing image show
Display image details
usage: dataprocessing image show [-h] [-f {json,shell,table,value,yaml}]
                                 [-c COLUMN] [--max-width <integer>]
                                 [--noindent] [--prefix PREFIX]
                                 <image>


Positional arguments:
Name or id of the image to display

Options:
-f=table, --format=table
the output format, defaults to table
Possible choices: json, shell, table, value, yaml
-c=[], --column=[]
specify the column(s) to include, can be repeated
--max-width=0
Maximum display width, 0 to disable
--noindent=False
whether to disable indenting the JSON
--prefix=
add a prefix to all variable names


dataprocessing image unregister
Unregister image(s)
usage: dataprocessing image unregister [-h] <image> [<image> ...]


Positional arguments:
Name(s) or id(s) of the image(s) to unregister


Node Group Templates

dataprocessing node group template create
Creates node group template
usage: dataprocessing node group template create [-h]
                                                 [-f {json,shell,table,value,yaml}]
                                                 [-c COLUMN]
                                                 [--max-width <integer>]
                                                 [--noindent]
                                                 [--prefix PREFIX]
                                                 [--name <name>]
                                                 [--plugin <plugin>]
                                                 [--version <version>]
                                                 [--processes <processes> [<processes> ...]]
                                                 [--flavor <flavor>]
                                                 [--security-groups <security-groups> [<security-groups> ...]]
                                                 [--auto-security-group]
                                                 [--availability-zone <availability-zone>]
                                                 [--floating-ip-pool <floating-ip-pool>]
                                                 [--volumes-per-node <volumes-per-node>]
                                                 [--volumes-size <volumes-size>]
                                                 [--volumes-type <volumes-type>]
                                                 [--volumes-availability-zone <volumes-availability-zone>]
                                                 [--volumes-mount-prefix <volumes-mount-prefix>]
                                                 [--volumes-locality]
                                                 [--description <description>]
                                                 [--autoconfig]
                                                 [--proxy-gateway] [--public]
                                                 [--protected]
                                                 [--json <filename>]
                                                 [--shares <filename>]
                                                 [--configs <filename>]


Options:
-f=table, --format=table
the output format, defaults to table
Possible choices: json, shell, table, value, yaml
-c=[], --column=[]
specify the column(s) to include, can be repeated
--max-width=0
Maximum display width, 0 to disable
--noindent=False
whether to disable indenting the JSON
--prefix=
add a prefix to all variable names
--name
Name of the node group template [REQUIRED if JSON is not provided]
--plugin
Name of the plugin [REQUIRED if JSON is not provided]
--version
Version of the plugin [REQUIRED if JSON is not provided]
--processes
List of the processes that will be launched on each instance [REQUIRED if JSON is not provided]
--flavor
Name or ID of the flavor [REQUIRED if JSON is not provided]
--security-groups
List of the security groups for the instances in this node group
--auto-security-group=False
Indicates if an additional security group should be created for the node group
--availability-zone
Name of the availability zone where instances will be created
--floating-ip-pool
ID of the floating IP pool
--volumes-per-node
Number of volumes attached to every node
--volumes-size
Size of volumes attached to node (GB). This parameter will be taken into account only if volumes-per-node is set and non-zero
--volumes-type
Type of the volumes. This parameter will be taken into account only if volumes-per-node is set and non-zero
--volumes-availability-zone
Name of the availability zone where volumes will be created. This parameter will be taken into account only if volumes-per-node is set and non-zero
--volumes-mount-prefix
Prefix for mount point directory. This parameter will be taken into account only if volumes-per-node is set and non-zero
--volumes-locality=False
If enabled, instance and attached volumes will be created on the same physical host. This parameter will be taken into account only if volumes-per-node is set and non-zero
--description
Description of the node group template
--autoconfig=False
If enabled, instances of the node group will be automatically configured
--proxy-gateway=False
If enabled, instances of the node group will be used to access other instances in the cluster
--public=False
Make the node group template public (Visible from other tenants)
--protected=False
Make the node group template protected
--json
JSON representation of the node group template. Other arguments will not be taken into account if this one is provided
--shares
JSON representation of the manila shares
--configs
JSON representation of the node group template configs


dataprocessing node group template delete
Deletes node group template
usage: dataprocessing node group template delete [-h]
                                                 <node-group-template>
                                                 [<node-group-template> ...]


Positional arguments:
Name(s) or id(s) of the node group template(s) to delete


dataprocessing node group template list
Lists node group templates
usage: dataprocessing node group template list [-h]
                                               [-f {csv,json,table,value,yaml}]
                                               [-c COLUMN]
                                               [--max-width <integer>]
                                               [--noindent]
                                               [--quote {all,minimal,none,nonnumeric}]
                                               [--long] [--plugin <plugin>]
                                               [--version <version>]
                                               [--name <name-substring>]


Options:
-f=table, --format=table
the output format, defaults to table
Possible choices: csv, json, table, value, yaml
-c=[], --column=[]
specify the column(s) to include, can be repeated
--max-width=0
Maximum display width, 0 to disable
--noindent=False
whether to disable indenting the JSON
--quote=nonnumeric
when to include quotes, defaults to nonnumeric
Possible choices: all, minimal, none, nonnumeric
--long=False
List additional fields in output
--plugin
List node group templates for specific plugin
--version
List node group templates with specific version of the plugin
--name
List node group templates with specific substring in the name


dataprocessing node group template show
Display node group template details
usage: dataprocessing node group template show [-h]
                                               [-f {json,shell,table,value,yaml}]
                                               [-c COLUMN]
                                               [--max-width <integer>]
                                               [--noindent] [--prefix PREFIX]
                                               <node-group-template>


Positional arguments:
Name or id of the node group template to display

Options:
-f=table, --format=table
the output format, defaults to table
Possible choices: json, shell, table, value, yaml
-c=[], --column=[]
specify the column(s) to include, can be repeated
--max-width=0
Maximum display width, 0 to disable
--noindent=False
whether to disable indenting the JSON
--prefix=
add a prefix to all variable names


dataprocessing node group template update
Updates node group template
usage: dataprocessing node group template update [-h]
                                                 [-f {json,shell,table,value,yaml}]
                                                 [-c COLUMN]
                                                 [--max-width <integer>]
                                                 [--noindent]
                                                 [--prefix PREFIX]
                                                 [--name <name>]
                                                 [--plugin <plugin>]
                                                 [--version <version>]
                                                 [--processes <processes> [<processes> ...]]
                                                 [--security-groups <security-groups> [<security-groups> ...]]
                                                 [--auto-security-group-enable | --auto-security-group-disable]
                                                 [--availability-zone <availability-zone>]
                                                 [--flavor <flavor>]
                                                 [--floating-ip-pool <floating-ip-pool>]
                                                 [--volumes-per-node <volumes-per-node>]
                                                 [--volumes-size <volumes-size>]
                                                 [--volumes-type <volumes-type>]
                                                 [--volumes-availability-zone <volumes-availability-zone>]
                                                 [--volumes-mount-prefix <volumes-mount-prefix>]
                                                 [--volumes-locality-enable | --volumes-locality-disable]
                                                 [--description <description>]
                                                 [--autoconfig-enable | --autoconfig-disable]
                                                 [--proxy-gateway-enable | --proxy-gateway-disable]
                                                 [--public | --private]
                                                 [--protected | --unprotected]
                                                 [--json <filename>]
                                                 [--shares <filename>]
                                                 [--configs <filename>]
                                                 <node-group-template>


Positional arguments:
Name or ID of the node group template

Options:
-f=table, --format=table
the output format, defaults to table
Possible choices: json, shell, table, value, yaml
-c=[], --column=[]
specify the column(s) to include, can be repeated
--max-width=0
Maximum display width, 0 to disable
--noindent=False
whether to disable indenting the JSON
--prefix=
add a prefix to all variable names
--name
New name of the node group template
--plugin
Name of the plugin
--version
Version of the plugin
--processes
List of the processes that will be launched on each instance
--security-groups
List of the security groups for the instances in this node group
--auto-security-group-enable
Additional security group should be created for the node group
--auto-security-group-disable
Additional security group should not be created for the node group
--availability-zone
Name of the availability zone where instances will be created
--flavor
Name or ID of the flavor
--floating-ip-pool
ID of the floating IP pool
--volumes-per-node
Number of volumes attached to every node
--volumes-size
Size of volumes attached to node (GB). This parameter will be taken into account only if volumes-per-node is set and non-zero
--volumes-type
Type of the volumes. This parameter will be taken into account only if volumes-per-node is set and non-zero
--volumes-availability-zone
Name of the availability zone where volumes will be created. This parameter will be taken into account only if volumes-per-node is set and non-zero
--volumes-mount-prefix
Prefix for mount point directory. This parameter will be taken into account only if volumes-per-node is set and non-zero
--volumes-locality-enable
Instance and attached volumes will be created on the same physical host. This parameter will be taken into account only if volumes-per-node is set and non-zero
--volumes-locality-disable
Instance and attached volumes creation on the same physical host will not be regulated. This parameter will be taken into account only if volumes-per-node is set and non-zero
--description
Description of the node group template
--autoconfig-enable
Instances of the node group will be automatically configured
--autoconfig-disable
Instances of the node group will not be automatically configured
--proxy-gateway-enable
Instances of the node group will be used to access other instances in the cluster
--proxy-gateway-disable
Instances of the node group will not be used to access other instances in the cluster
--public
Make the node group template public (Visible from other tenants)
--private
Make the node group template private (Visible only from this tenant)
--protected
Make the node group template protected
--unprotected
Make the node group template unprotected
--json
JSON representation of the node group template update fields. Other arguments will not be taken into account if this one is provided
--shares
JSON representation of the manila shares
--configs
JSON representation of the node group template configs


Cluster Templates

dataprocessing cluster template create
Creates cluster template
usage: dataprocessing cluster template create [-h]
                                              [-f {json,shell,table,value,yaml}]
                                              [-c COLUMN]
                                              [--max-width <integer>]
                                              [--noindent] [--prefix PREFIX]
                                              [--name <name>]
                                              [--node-groups <node-group:instances_count> [<node-group:instances_count> ...]]
                                              [--anti-affinity <anti-affinity> [<anti-affinity> ...]]
                                              [--description <description>]
                                              [--autoconfig] [--public]
                                              [--protected]
                                              [--json <filename>]
                                              [--shares <filename>]
                                              [--configs <filename>]


Options:
-f=table, --format=table
the output format, defaults to table
Possible choices: json, shell, table, value, yaml
-c=[], --column=[]
specify the column(s) to include, can be repeated
--max-width=0
Maximum display width, 0 to disable
--noindent=False
whether to disable indenting the JSON
--prefix=
add a prefix to all variable names
--name
Name of the cluster template [REQUIRED if JSON is not provided]
--node-groups
List of the node groups (names or IDs) and the number of instances for each of them [REQUIRED if JSON is not provided]
--anti-affinity
List of processes that should be added to an anti-affinity group
--description
Description of the cluster template
--autoconfig=False
If enabled, instances of the cluster will be automatically configured
--public=False
Make the cluster template public (Visible from other tenants)
--protected=False
Make the cluster template protected
--json
JSON representation of the cluster template. Other arguments will not be taken into account if this one is provided
--shares
JSON representation of the manila shares
--configs
JSON representation of the cluster template configs


dataprocessing cluster template delete
Deletes cluster template
usage: dataprocessing cluster template delete [-h]
                                              <cluster-template>
                                              [<cluster-template> ...]


Positional arguments:
Name(s) or id(s) of the cluster template(s) to delete


dataprocessing cluster template list
Lists cluster templates
usage: dataprocessing cluster template list [-h]
                                            [-f {csv,json,table,value,yaml}]
                                            [-c COLUMN]
                                            [--max-width <integer>]
                                            [--noindent]
                                            [--quote {all,minimal,none,nonnumeric}]
                                            [--long] [--plugin <plugin>]
                                            [--version <version>]
                                            [--name <name-substring>]


Options:
-f=table, --format=table
the output format, defaults to table
Possible choices: csv, json, table, value, yaml
-c=[], --column=[]
specify the column(s) to include, can be repeated
--max-width=0
Maximum display width, 0 to disable
--noindent=False
whether to disable indenting the JSON
--quote=nonnumeric
when to include quotes, defaults to nonnumeric
Possible choices: all, minimal, none, nonnumeric
--long=False
List additional fields in output
--plugin
List cluster templates for specific plugin
--version
List cluster templates with specific version of the plugin
--name
List cluster templates with specific substring in the name


dataprocessing cluster template show
Display cluster template details
usage: dataprocessing cluster template show [-h]
                                            [-f {json,shell,table,value,yaml}]
                                            [-c COLUMN]
                                            [--max-width <integer>]
                                            [--noindent] [--prefix PREFIX]
                                            <cluster-template>


Positional arguments:
Name or id of the cluster template to display

Options:
-f=table, --format=table
the output format, defaults to table
Possible choices: json, shell, table, value, yaml
-c=[], --column=[]
specify the column(s) to include, can be repeated
--max-width=0
Maximum display width, 0 to disable
--noindent=False
whether to disable indenting the JSON
--prefix=
add a prefix to all variable names


dataprocessing cluster template update
Updates cluster template
usage: dataprocessing cluster template update [-h]
                                              [-f {json,shell,table,value,yaml}]
                                              [-c COLUMN]
                                              [--max-width <integer>]
                                              [--noindent] [--prefix PREFIX]
                                              [--name <name>]
                                              [--node-groups <node-group:instances_count> [<node-group:instances_count> ...]]
                                              [--anti-affinity <anti-affinity> [<anti-affinity> ...]]
                                              [--description <description>]
                                              [--autoconfig-enable | --autoconfig-disable]
                                              [--public | --private]
                                              [--protected | --unprotected]
                                              [--json <filename>]
                                              [--shares <filename>]
                                              [--configs <filename>]
                                              <cluster-template>


Positional arguments:
Name or ID of the cluster template [REQUIRED]

Options:
-f=table, --format=table
the output format, defaults to table
Possible choices: json, shell, table, value, yaml
-c=[], --column=[]
specify the column(s) to include, can be repeated
--max-width=0
Maximum display width, 0 to disable
--noindent=False
whether to disable indenting the JSON
--prefix=
add a prefix to all variable names
--name
New name of the cluster template
--node-groups
List of the node groups (names or IDs) and the number of instances for each of them
--anti-affinity
List of processes that should be added to an anti-affinity group
--description
Description of the cluster template
--autoconfig-enable
Instances of the cluster will be automatically configured
--autoconfig-disable
Instances of the cluster will not be automatically configured
--public
Make the cluster template public (Visible from other tenants)
--private
Make the cluster template private (Visible only from this tenant)
--protected
Make the cluster template protected
--unprotected
Make the cluster template unprotected
--json
JSON representation of the cluster template. Other arguments will not be taken into account if this one is provided
--shares
JSON representation of the manila shares
--configs
JSON representation of the cluster template configs


Clusters

dataprocessing cluster create
Creates cluster
usage: dataprocessing cluster create [-h] [-f {json,shell,table,value,yaml}]
                                     [-c COLUMN] [--max-width <integer>]
                                     [--noindent] [--prefix PREFIX]
                                     [--name <name>]
                                     [--cluster-template <cluster-template>]
                                     [--image <image>]
                                     [--description <description>]
                                     [--user-keypair <keypair>]
                                     [--neutron-network <network>]
                                     [--count <count>] [--public]
                                     [--protected] [--transient]
                                     [--json <filename>] [--wait]


Options:
-f=table, --format=table
the output format, defaults to table
Possible choices: json, shell, table, value, yaml
-c=[], --column=[]
specify the column(s) to include, can be repeated
--max-width=0
Maximum display width, 0 to disable
--noindent=False
whether to disable indenting the JSON
--prefix=
add a prefix to all variable names
--name
Name of the cluster [REQUIRED if JSON is not provided]
--cluster-template
Cluster template name or ID [REQUIRED if JSON is not provided]
--image
Image that will be used for cluster deployment (Name or ID) [REQUIRED if JSON is not provided]
--description
Description of the cluster
--user-keypair
User keypair to get access to VMs after cluster creation
--neutron-network
Instances of the cluster will get fixed IP addresses in this network. (Name or ID should be provided)
--count
Number of clusters to be created
--public=False
Make the cluster public (Visible from other tenants)
--protected=False
Make the cluster protected
--transient=False
Create transient cluster
--json
JSON representation of the cluster. Other arguments (except for --wait) will not be taken into account if this one is provided
--wait=False
Wait for the cluster creation to complete


dataprocessing cluster delete
Deletes cluster
usage: dataprocessing cluster delete [-h] [--wait] <cluster> [<cluster> ...]


Positional arguments:
Name(s) or id(s) of the cluster(s) to delete

Options:
--wait=False
Wait for the cluster(s) delete to complete


dataprocessing cluster list
Lists clusters
usage: dataprocessing cluster list [-h] [-f {csv,json,table,value,yaml}]
                                   [-c COLUMN] [--max-width <integer>]
                                   [--noindent]
                                   [--quote {all,minimal,none,nonnumeric}]
                                   [--long] [--plugin <plugin>]
                                   [--version <version>]
                                   [--name <name-substring>]


Options:
-f=table, --format=table
the output format, defaults to table
Possible choices: csv, json, table, value, yaml
-c=[], --column=[]
specify the column(s) to include, can be repeated
--max-width=0
Maximum display width, 0 to disable
--noindent=False
whether to disable indenting the JSON
--quote=nonnumeric
when to include quotes, defaults to nonnumeric
Possible choices: all, minimal, none, nonnumeric
--long=False
List additional fields in output
--plugin
List clusters with specific plugin
--version
List clusters with specific version of the plugin
--name
List clusters with specific substring in the name


dataprocessing cluster scale
Scales cluster
usage: dataprocessing cluster scale [-h] [-f {json,shell,table,value,yaml}]
                                    [-c COLUMN] [--max-width <integer>]
                                    [--noindent] [--prefix PREFIX]
                                    [--instances <node-group-template:instances_count> [<node-group-template:instances_count> ...]]
                                    [--json <filename>] [--wait]
                                    <cluster>


Positional arguments:
Name or ID of the cluster

Options:
-f=table, --format=table
the output format, defaults to table
Possible choices: json, shell, table, value, yaml
-c=[], --column=[]
specify the column(s) to include, can be repeated
--max-width=0
Maximum display width, 0 to disable
--noindent=False
whether to disable indenting the JSON
--prefix=
add a prefix to all variable names
--instances
Node group templates and the number of their instances to be scaled to [REQUIRED if JSON is not provided]
--json
JSON representation of the cluster scale object. Other arguments (except for --wait) will not be taken into account if this one is provided
--wait=False
Wait for the cluster scale to complete


dataprocessing cluster show
Display cluster details
usage: dataprocessing cluster show [-h] [-f {json,shell,table,value,yaml}]
                                   [-c COLUMN] [--max-width <integer>]
                                   [--noindent] [--prefix PREFIX]
                                   [--verification]
                                   <cluster>


Positional arguments:
Name or id of the cluster to display

Options:
-f=table, --format=table
the output format, defaults to table
Possible choices: json, shell, table, value, yaml
-c=[], --column=[]
specify the column(s) to include, can be repeated
--max-width=0
Maximum display width, 0 to disable
--noindent=False
whether to disable indenting the JSON
--prefix=
add a prefix to all variable names
--verification=False
List additional fields for verifications


dataprocessing cluster update
Updates cluster
usage: dataprocessing cluster update [-h] [-f {json,shell,table,value,yaml}]
                                     [-c COLUMN] [--max-width <integer>]
                                     [--noindent] [--prefix PREFIX]
                                     [--name <name>]
                                     [--description <description>]
                                     [--shares <filename>]
                                     [--public | --private]
                                     [--protected | --unprotected]
                                     <cluster>


Positional arguments:
Name or ID of the cluster

Options:
-f=table, --format=table
the output format, defaults to table
Possible choices: json, shell, table, value, yaml
-c=[], --column=[]
specify the column(s) to include, can be repeated
--max-width=0
Maximum display width, 0 to disable
--noindent=False
whether to disable indenting the JSON
--prefix=
add a prefix to all variable names
--name
New name of the cluster
--description
Description of the cluster
--shares
JSON representation of the manila shares
--public
Make the cluster public (Visible from other tenants)
--private
Make the cluster private (Visible only from this tenant)
--protected
Make the cluster protected
--unprotected
Make the cluster unprotected


dataprocessing cluster verification
Updates cluster verifications
usage: dataprocessing cluster verification [-h]
                                           [-f {json,shell,table,value,yaml}]
                                           [-c COLUMN] [--max-width <integer>]
                                           [--noindent] [--prefix PREFIX]
                                           (--start | --show)
                                           <cluster>


Positional arguments:
Name or ID of the cluster

Options:
-f=table, --format=table
the output format, defaults to table
Possible choices: json, shell, table, value, yaml
-c=[], --column=[]
specify the column(s) to include, can be repeated
--max-width=0
Maximum display width, 0 to disable
--noindent=False
whether to disable indenting the JSON
--prefix=
add a prefix to all variable names
--start
Start health verification for the cluster
--show=False
Show health of the cluster


Data Sources

dataprocessing data source create
Creates data source
usage: dataprocessing data source create [-h]
                                         [-f {json,shell,table,value,yaml}]
                                         [-c COLUMN] [--max-width <integer>]
                                         [--noindent] [--prefix PREFIX] --type
                                         <type> --url <url>
                                         [--username <username>]
                                         [--password <password>]
                                         [--description <description>]
                                         [--public] [--protected]
                                         <name>


Positional arguments:
Name of the data source

Options:
-f=table, --format=table
the output format, defaults to table
Possible choices: json, shell, table, value, yaml
-c=[], --column=[]
specify the column(s) to include, can be repeated
--max-width=0
Maximum display width, 0 to disable
--noindent=False
whether to disable indenting the JSON
--prefix=
add a prefix to all variable names
--type
Type of the data source (swift, hdfs, maprfs, manila) [REQUIRED]
Possible choices: swift, hdfs, maprfs, manila
--url
Url for the data source [REQUIRED]
--username
Username for accessing the data source url
--password
Password for accessing the data source url
--description
Description of the data source
--public=False
Make the data source public
--protected=False
Make the data source protected


dataprocessing data source delete
Delete data source
usage: dataprocessing data source delete [-h]
                                         <data-source> [<data-source> ...]


Positional arguments:
Name(s) or id(s) of the data source(s) to delete


dataprocessing data source list
Lists data sources
usage: dataprocessing data source list [-h] [-f {csv,json,table,value,yaml}]
                                       [-c COLUMN] [--max-width <integer>]
                                       [--noindent]
                                       [--quote {all,minimal,none,nonnumeric}]
                                       [--long] [--type <type>]


Options:
-f=table, --format=table
the output format, defaults to table
Possible choices: csv, json, table, value, yaml
-c=[], --column=[]
specify the column(s) to include, can be repeated
--max-width=0
Maximum display width, 0 to disable
--noindent=False
whether to disable indenting the JSON
--quote=nonnumeric
when to include quotes, defaults to nonnumeric
Possible choices: all, minimal, none, nonnumeric
--long=False
List additional fields in output
--type
List data sources of specific type (swift, hdfs, maprfs, manila)
Possible choices: swift, hdfs, maprfs, manila


dataprocessing data source show
Display data source details
usage: dataprocessing data source show [-h] [-f {json,shell,table,value,yaml}]
                                       [-c COLUMN] [--max-width <integer>]
                                       [--noindent] [--prefix PREFIX]
                                       <data-source>


Positional arguments:
Name or id of the data source to display

Options:
-f=table, --format=table
the output format, defaults to table
Possible choices: json, shell, table, value, yaml
-c=[], --column=[]
specify the column(s) to include, can be repeated
--max-width=0
Maximum display width, 0 to disable
--noindent=False
whether to disable indenting the JSON
--prefix=
add a prefix to all variable names


dataprocessing data source update
Update data source
usage: dataprocessing data source update [-h]
                                         [-f {json,shell,table,value,yaml}]
                                         [-c COLUMN] [--max-width <integer>]
                                         [--noindent] [--prefix PREFIX]
                                         [--name <name>] [--type <type>]
                                         [--url <url>] [--username <username>]
                                         [--password <password>]
                                         [--description <description>]
                                         [--public | --private]
                                         [--protected | --unprotected]
                                         <data-source>


Positional arguments:
Name or id of the data source

Options:
-f=table, --format=table
the output format, defaults to table
Possible choices: json, shell, table, value, yaml
-c=[], --column=[]
specify the column(s) to include, can be repeated
--max-width=0
Maximum display width, 0 to disable
--noindent=False
whether to disable indenting the JSON
--prefix=
add a prefix to all variable names
--name
New name of the data source
--type
Type of the data source (swift, hdfs, maprfs, manila)
Possible choices: swift, hdfs, maprfs, manila
--url
Url for the data source
--username
Username for accessing the data source url
--password
Password for accessing the data source url
--description
Description of the data source
--public
Make the data source public (Visible from other tenants)
--private
Make the data source private (Visible only from this tenant)
--protected
Make the data source protected
--unprotected
Make the data source unprotected


Job Binaries

dataprocessing job binary create
Creates job binary
usage: dataprocessing job binary create [-h]
                                        [-f {json,shell,table,value,yaml}]
                                        [-c COLUMN] [--max-width <integer>]
                                        [--noindent] [--prefix PREFIX]
                                        [--name <name>]
                                        [--data <file> | --url <url>]
                                        [--description <description>]
                                        [--username <username>]
                                        [--password <password> | --password-prompt]
                                        [--public] [--protected]
                                        [--json <filename>]


Options:
-f=table, --format=table
the output format, defaults to table
Possible choices: json, shell, table, value, yaml
-c=[], --column=[]
specify the column(s) to include, can be repeated
--max-width=0
Maximum display width, 0 to disable
--noindent=False
whether to disable indenting the JSON
--prefix=
add a prefix to all variable names
--name
Name of the job binary [REQUIRED if JSON is not provided]
--data
File that will be stored in the internal DB [REQUIRED if JSON and URL are not provided]
--url
URL for the job binary [REQUIRED if JSON and file are not provided]
--description
Description of the job binary
--username
Username for accessing the job binary URL
--password
Password for accessing the job binary URL
--password-prompt=False
Prompt interactively for password
--public=False
Make the job binary public
--protected=False
Make the job binary protected
--json
JSON representation of the job binary. Other arguments will not be taken into account if this one is provided


dataprocessing job binary delete
Deletes job binary
usage: dataprocessing job binary delete [-h] <job-binary> [<job-binary> ...]


Positional arguments:
Name(s) or id(s) of the job binary(ies) to delete


dataprocessing job binary download
Downloads job binary
usage: dataprocessing job binary download [-h] [--file <file>] <job-binary>


Positional arguments:
Name or ID of the job binary to download

Options:
--file
Destination file (defaults to job binary name)


dataprocessing job binary list
Lists job binaries
usage: dataprocessing job binary list [-h] [-f {csv,json,table,value,yaml}]
                                      [-c COLUMN] [--max-width <integer>]
                                      [--noindent]
                                      [--quote {all,minimal,none,nonnumeric}]
                                      [--long] [--name <name-substring>]


Options:
-f=table, --format=table
the output format, defaults to table
Possible choices: csv, json, table, value, yaml
-c=[], --column=[]
specify the column(s) to include, can be repeated
--max-width=0
Maximum display width, 0 to disable
--noindent=False
whether to disable indenting the JSON
--quote=nonnumeric
when to include quotes, defaults to nonnumeric
Possible choices: all, minimal, none, nonnumeric
--long=False
List additional fields in output
--name
List job binaries with specific substring in the name


dataprocessing job binary show
Display job binary details
usage: dataprocessing job binary show [-h] [-f {json,shell,table,value,yaml}]
                                      [-c COLUMN] [--max-width <integer>]
                                      [--noindent] [--prefix PREFIX]
                                      <job-binary>


Positional arguments:
Name or ID of the job binary to display

Options:
-f=table, --format=table
the output format, defaults to table
Possible choices: json, shell, table, value, yaml
-c=[], --column=[]
specify the column(s) to include, can be repeated
--max-width=0
Maximum display width, 0 to disable
--noindent=False
whether to disable indenting the JSON
--prefix=
add a prefix to all variable names


dataprocessing job binary update
Updates job binary
usage: dataprocessing job binary update [-h]
                                        [-f {json,shell,table,value,yaml}]
                                        [-c COLUMN] [--max-width <integer>]
                                        [--noindent] [--prefix PREFIX]
                                        [--name <name>] [--url <url>]
                                        [--description <description>]
                                        [--username <username>]
                                        [--password <password> | --password-prompt]
                                        [--public | --private]
                                        [--protected | --unprotected]
                                        [--json <filename>]
                                        <job-binary>


Positional arguments:
Name or ID of the job binary

Options:
-f=table, --format=table
the output format, defaults to table
Possible choices: json, shell, table, value, yaml
-c=[], --column=[]
specify the column(s) to include, can be repeated
--max-width=0
Maximum display width, 0 to disable
--noindent=False
whether to disable indenting the JSON
--prefix=
add a prefix to all variable names
--name
New name of the job binary
--url
URL for the job binary [Internal DB URL can not be updated]
--description
Description of the job binary
--username
Username for accessing the job binary URL
--password
Password for accessing the job binary URL
--password-prompt=False
Prompt interactively for password
--public
Make the job binary public (Visible from other tenants)
--private
Make the job binary private (Visible only from this tenant)
--protected
Make the job binary protected
--unprotected
Make the job binary unprotected
--json
JSON representation of the update object. Other arguments will not be taken into account if this one is provided
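
For example, a sketch of updating the credentials and visibility of an existing binary (the binary name and username are hypothetical):

# Prompt for the new password interactively; names are placeholders
openstack dataprocessing job binary update --username demo --password-prompt --public my-wordcount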


Job Types

dataprocessing job type configs get
Get job type configs
usage: dataprocessing job type configs get [-h] [--file <file>] <job-type>


Positional arguments:
Type of the job to provide config information about
Possible choices: Hive, Java, MapReduce, Storm, Pig, Shell, MapReduce.Streaming, Spark

Options:
--file
Destination file (defaults to job type)
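
For example, saving the configuration hints for the MapReduce job type (the destination file name is a placeholder):

# Write the MapReduce job type config information to a local file
openstack dataprocessing job type configs get --file mapreduce-configs MapReduce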


dataprocessing job type list
Lists job types supported by plugins
usage: dataprocessing job type list [-h] [-f {csv,json,table,value,yaml}]
                                    [-c COLUMN] [--max-width <integer>]
                                    [--noindent]
                                    [--quote {all,minimal,none,nonnumeric}]
                                    [--type <type>] [--plugin <plugin>]
                                    [--version <version>]


Options:
-f=table, --format=table
the output format, defaults to table
Possible choices: csv, json, table, value, yaml
-c=[], --column=[]
specify the column(s) to include, can be repeated
--max-width=0
Maximum display width, 0 to disable
--noindent=False
whether to disable indenting the JSON
--quote=nonnumeric
when to include quotes, defaults to nonnumeric
Possible choices: all, minimal, none, nonnumeric
--type
Get information about specific job type
Possible choices: Hive, Java, MapReduce, Storm, Pig, Shell, MapReduce.Streaming, Spark
--plugin
Get only job types supported by this plugin
--version
Get only job types supported by a specific version of the plugin. This parameter is taken into account only if a plugin is provided
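
For example, listing the job types supported by one plugin version (the plugin name and version are placeholders):

# Hypothetical plugin name and version shown for illustration only
openstack dataprocessing job type list --plugin vanilla --version 2.7.1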


Job Templates

dataprocessing job template create
Creates job template
usage: dataprocessing job template create [-h]
                                          [-f {json,shell,table,value,yaml}]
                                          [-c COLUMN] [--max-width <integer>]
                                          [--noindent] [--prefix PREFIX]
                                          [--name <name>] [--type <type>]
                                          [--mains <main> [<main> ...]]
                                          [--libs <lib> [<lib> ...]]
                                          [--description <description>]
                                          [--public] [--protected]
                                          [--interface <filename>]
                                          [--json <filename>]


Options:
-f=table, --format=table
the output format, defaults to table
Possible choices: json, shell, table, value, yaml
-c=[], --column=[]
specify the column(s) to include, can be repeated
--max-width=0
Maximum display width, 0 to disable
--noindent=False
whether to disable indenting the JSON
--prefix=
add a prefix to all variable names
--name
Name of the job template [REQUIRED if JSON is not provided]
--type
Type of the job (Hive, Java, MapReduce, Storm, Pig, Shell, MapReduce.Streaming, Spark) [REQUIRED if JSON is not provided]
Possible choices: Hive, Java, MapReduce, Storm, Pig, Shell, MapReduce.Streaming, Spark
--mains
Name(s) or ID(s) of the job's main job binary(ies)
--libs
Name(s) or ID(s) of the job's lib job binary(ies)
--description
Description of the job template
--public=False
Make the job template public
--protected=False
Make the job template protected
--interface
JSON representation of the interface
--json
JSON representation of the job template
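
For example, a sketch of creating a Pig job template from existing job binaries (the template and binary names are hypothetical):

# Hypothetical names shown for illustration only
openstack dataprocessing job template create --name pig-wordcount \
  --type Pig --mains wordcount.pig --libs wordcount-udf.jar \
  --description "Pig WordCount job template"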


dataprocessing job template delete
Deletes job template
usage: dataprocessing job template delete [-h]
                                          <job-template> [<job-template> ...]


Positional arguments:
Name(s) or ID(s) of the job template(s) to delete


dataprocessing job template list
Lists job templates
usage: dataprocessing job template list [-h] [-f {csv,json,table,value,yaml}]
                                        [-c COLUMN] [--max-width <integer>]
                                        [--noindent]
                                        [--quote {all,minimal,none,nonnumeric}]
                                        [--long] [--type <type>]
                                        [--name <name-substring>]


Options:
-f=table, --format=table
the output format, defaults to table
Possible choices: csv, json, table, value, yaml
-c=[], --column=[]
specify the column(s) to include, can be repeated
--max-width=0
Maximum display width, 0 to disable
--noindent=False
whether to disable indenting the JSON
--quote=nonnumeric
when to include quotes, defaults to nonnumeric
Possible choices: all, minimal, none, nonnumeric
--long=False
List additional fields in output
--type
List job templates of specific type
Possible choices: Hive, Java, MapReduce, Storm, Pig, Shell, MapReduce.Streaming, Spark
--name
List job templates with specific substring in the name
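
For example (the name substring is a placeholder):

# List Pig job templates whose name contains "wordcount", with extra columns
openstack dataprocessing job template list --long --type Pig --name wordcount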


dataprocessing job template show
Display job template details
usage: dataprocessing job template show [-h]
                                        [-f {json,shell,table,value,yaml}]
                                        [-c COLUMN] [--max-width <integer>]
                                        [--noindent] [--prefix PREFIX]
                                        <job-template>


Positional arguments:
Name or ID of the job template to display

Options:
-f=table, --format=table
the output format, defaults to table
Possible choices: json, shell, table, value, yaml
-c=[], --column=[]
specify the column(s) to include, can be repeated
--max-width=0
Maximum display width, 0 to disable
--noindent=False
whether to disable indenting the JSON
--prefix=
add a prefix to all variable names


dataprocessing job template update
Updates job template
usage: dataprocessing job template update [-h]
                                          [-f {json,shell,table,value,yaml}]
                                          [-c COLUMN] [--max-width <integer>]
                                          [--noindent] [--prefix PREFIX]
                                          [--name <name>]
                                          [--description <description>]
                                          [--public | --private]
                                          [--protected | --unprotected]
                                          <job-template>


Positional arguments:
Name or ID of the job template

Options:
-f=table, --format=table
the output format, defaults to table
Possible choices: json, shell, table, value, yaml
-c=[], --column=[]
specify the column(s) to include, can be repeated
--max-width=0
Maximum display width, 0 to disable
--noindent=False
whether to disable indenting the JSON
--prefix=
add a prefix to all variable names
--name
New name of the job template
--description
Description of the job template
--public
Make the job template public (Visible from other tenants)
--private
Make the job template private (Visible only from this tenant)
--protected
Make the job template protected
--unprotected
Make the job template unprotected
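
For example, a sketch of renaming a template and marking it protected (the names are hypothetical):

# Hypothetical template names shown for illustration only
openstack dataprocessing job template update --name pig-wordcount-v2 --protected pig-wordcount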


Jobs

dataprocessing job delete
Deletes job
usage: dataprocessing job delete [-h] [--wait] <job> [<job> ...]


Positional arguments:
ID(s) of the job(s) to delete

Options:
--wait=False
Wait for the deletion of the job(s) to complete
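
For example (the job ID is a hypothetical placeholder):

# Delete a job and wait until the deletion finishes
openstack dataprocessing job delete --wait 0647061f-ab98-4c89-84e1-e1a91ab4a42f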


dataprocessing job execute
Executes job
usage: dataprocessing job execute [-h] [-f {json,shell,table,value,yaml}]
                                  [-c COLUMN] [--max-width <integer>]
                                  [--noindent] [--prefix PREFIX]
                                  [--job-template <job-template>]
                                  [--cluster <cluster>] [--input <input>]
                                  [--output <output>]
                                  [--params <name:value> [<name:value> ...]]
                                  [--args <argument> [<argument> ...]]
                                  [--public] [--protected]
                                  [--config-json <filename> | --configs <name:value> [<name:value> ...]]
                                  [--interface <filename>] [--json <filename>]


Options:
-f=table, --format=table
the output format, defaults to table
Possible choices: json, shell, table, value, yaml
-c=[], --column=[]
specify the column(s) to include, can be repeated
--max-width=0
Maximum display width, 0 to disable
--noindent=False
whether to disable indenting the JSON
--prefix=
add a prefix to all variable names
--job-template
Name or ID of the job template [REQUIRED if JSON is not provided]
--cluster
Name or ID of the cluster [REQUIRED if JSON is not provided]
--input
Name or ID of the input data source
--output
Name or ID of the output data source
--params
Parameters to add to the job
--args
Arguments to add to the job
--public=False
Make the job public
--protected=False
Make the job protected
--config-json
JSON representation of the job configs
--configs
Configs to add to the job
--interface
JSON representation of the interface
--json
JSON representation of the job. Other arguments will not be taken into account if this one is provided
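
For example, a sketch of launching a job from an existing template on an existing cluster; every name below is a hypothetical placeholder:

# Hypothetical template, cluster and data source names shown for illustration only
openstack dataprocessing job execute --job-template pig-wordcount \
  --cluster my-cluster --input input-ds --output output-ds \
  --configs mapred.map.tasks:2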


dataprocessing job list
Lists jobs
usage: dataprocessing job list [-h] [-f {csv,json,table,value,yaml}]
                               [-c COLUMN] [--max-width <integer>]
                               [--noindent]
                               [--quote {all,minimal,none,nonnumeric}]
                               [--long] [--status <status>]


Options:
-f=table, --format=table
the output format, defaults to table
Possible choices: csv, json, table, value, yaml
-c=[], --column=[]
specify the column(s) to include, can be repeated
--max-width=0
Maximum display width, 0 to disable
--noindent=False
whether to disable indenting the JSON
--quote=nonnumeric
when to include quotes, defaults to nonnumeric
Possible choices: all, minimal, none, nonnumeric
--long=False
List additional fields in output
--status
List jobs with specific status
Possible choices: done-with-error, failed, killed, pending, running, succeeded, to-be-killed
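
For example:

# List only successfully finished jobs, with additional columns
openstack dataprocessing job list --long --status succeeded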


dataprocessing job show
Display job details
usage: dataprocessing job show [-h] [-f {json,shell,table,value,yaml}]
                               [-c COLUMN] [--max-width <integer>]
                               [--noindent] [--prefix PREFIX]
                               <job>


Positional arguments:
ID of the job to display

Options:
-f=table, --format=table
the output format, defaults to table
Possible choices: json, shell, table, value, yaml
-c=[], --column=[]
specify the column(s) to include, can be repeated
--max-width=0
Maximum display width, 0 to disable
--noindent=False
whether to disable indenting the JSON
--prefix=
add a prefix to all variable names


dataprocessing job update
Updates job
usage: dataprocessing job update [-h]
                                 [-f {json,shell,table,value,yaml}]
                                 [-c COLUMN] [--max-width <integer>]
                                 [--noindent] [--prefix PREFIX]
                                 [--public | --private]
                                 [--protected | --unprotected]
                                 <job>


Positional arguments:
ID of the job to update

Options:
-f=table, --format=table
the output format, defaults to table
Possible choices: json, shell, table, value, yaml
-c=[], --column=[]
specify the column(s) to include, can be repeated
--max-width=0
Maximum display width, 0 to disable
--noindent=False
whether to disable indenting the JSON
--prefix=
add a prefix to all variable names
--public
Make the job public (Visible from other tenants)
--private
Make the job private (Visible only from this tenant)
--protected
Make the job protected
--unprotected
Make the job unprotected
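
For example (the job ID is a hypothetical placeholder):

# Make a job visible only to its own tenant and remove protection
openstack dataprocessing job update --private --unprotected 0647061f-ab98-4c89-84e1-e1a91ab4a42f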


HOW TO PARTICIPATE

Getting started

Create an account on GitHub (if you don't have one)
Make sure that your local git is properly configured by executing git config --list. If it is not, configure user.name and user.email (see the example after this list)

Create an account on Launchpad (if you don't have one)
Subscribe to the OpenStack general mailing list
Subscribe to the OpenStack development mailing list
Create an OpenStack profile
Log in to OpenStack Gerrit with your Launchpad ID
Sign the OpenStack Individual Contributor License Agreement
Make sure that your email is listed in your Gerrit identities

Subscribe to code reviews: go to your settings on http://review.openstack.org
Go to Watched Projects
Add openstack/sahara, openstack/sahara-dashboard, openstack/sahara-extra, openstack/python-saharaclient, openstack/sahara-image-elements, openstack/horizon
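
A minimal sketch of the git configuration step referenced above (the name and email are placeholders; substitute your own):

# Hypothetical identity values shown for illustration only
git config --global user.name "Jane Developer"
git config --global user.email "jane@example.com"
git config --list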


How to stay in touch with the community?

If you have something to discuss, use the OpenStack development mailing list and prefix the mail subject with [Sahara]
Join the #openstack-sahara IRC channel on freenode
Join the public weekly meetings on Thursdays at 18:00 UTC in the #openstack-meeting-alt IRC channel
Join the public weekly meetings on Thursdays at 14:00 UTC in the #openstack-meeting-3 IRC channel

How to send your first patch for review?

Check out the Sahara code from GitHub
Carefully read https://wiki.openstack.org/wiki/Gerrit_Workflow

Apply and commit your changes
Make sure that your code passes PEP8 checks and the unit tests
Submit your patch for review
Monitor the status of your patch review on https://review.openstack.org/#/

Code is hosted on review.openstack.org and mirrored to GitHub and git.openstack.org. Submit bugs for the Sahara project and for the Sahara client on Launchpad. Submit code to the openstack/python-saharaclient project using Gerrit.

AUTHOR

OpenStack Foundation

COPYRIGHT

2013, OpenStack Foundation
May 30, 2016 0.14.0