.TH arc.conf 5 "2018-11-26" "NorduGrid ARC 5.4.3" "NorduGrid ARC" .SH NAME arc.conf \- ARC configuration .SH SYNOPSIS .B /etc/arc.conf .B ${ARC_LOCATION}/etc/arc.conf .SH DESCRIPTION ARC has two separate configuration files - one for client tools and another for services. This document describes the services configuration file. For client configuration please see the "ARC Clients User Manual" at http://www.nordugrid.org/documents/arc-ui.pdf ARC configuration uses a plain-text "ini-style" format. It is also possible to use an XML format, however that is outside the scope of this document. The configuration file consists of several configuration blocks. Each configuration block is identified by a keyword and contains the configuration options for a specific part of the ARC middleware. Each configuration block starts with its identifying keyword inside square brackets. Thereafter follow one or more attribute-value pairs, written one per line in the following format (note that the attribute names are CASE-SENSITIVE): .nf .B [keyword1] .BR attribute1 ="value1" .BR attribute2 ="value2" .B [keyword2] .BR attribute ="value" .fi If the ARC_LOCATION environment variable is set, the ARC configuration file located at ${ARC_LOCATION}/etc/arc.conf is read first. If this file is not present or the relevant configuration information is not found in this file, the file at /etc/arc.conf is read. .SH The [common] block The parameters set within this block are available for all the other blocks. These are the configuration parameters shared by the different components of ARC (e.g. grid-manager, infosys). .TP .B hostname hostname - the FQDN of the frontend node, optional in the common block but MUST be set in the cluster block .IR Example: .br hostname="myhost.org" .TP .B x509_voms_dir x509_voms_dir path - the path to the directory containing *.lsc files needed for checking validity of VOMS extensions. If not specified, the default value /etc/grid-security/vomsdir is used. .IR Example: .br x509_voms_dir="/etc/grid-security/vomsdir" .TP .B lrms ARC supports various LRMS flavours, as listed in this section. For a detailed description of the options please refer to the ARC CE sysadmin guide: http://www.nordugrid.org/documents/arc-ce-sysadm-guide.pdf .B ONLY ONE LRMS IS ALLOWED. MULTIPLE lrms ENTRIES WILL TRIGGER UNEXPECTED BEHAVIOUR. .B lrms sets the type of the Local Resource Management System (queue system) and optionally the default queue name, separated with a blank space: .B lrmstype queue_name. For .B lrmstype, the following systems are supported and can be chosen (one per server): fork - simple forking of jobs to the same node as the server sge - (Sun/Oracle) Grid Engine condor - Condor pbs - PBS lsf - LSF ll - LoadLeveler slurm - SLURM dgbridge - Desktop Grid PBS has many flavours; ARC currently supports OpenPBS, PBSPro, ScalablePBS and Torque (the official name for ScalablePBS). There is no need to specify the flavour or the version number of the PBS, simply write 'pbs'. Similarly, there is no need to specify (Sun/Oracle) Grid Engine versions and flavours. "lrmstype" MUST be set here; it is a MANDATORY parameter! The optional .B "queue" parameter specifies the default Grid queue of the LRMS. Jobs will be submitted to this queue if they do not specify a queue name in the job description. The queue name must match one of the [queue/queue_name] block labels, see below. .IR Example: .br lrms="pbs gridlong" .br lrms="pbs"
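.PP
For illustration only, a minimal [common] block combining the options above might look like the following sketch; the hostname, queue name and paths are placeholders, not recommended values:
.nf
[common]
hostname="myhost.org"
lrms="pbs gridlong"
x509_voms_dir="/etc/grid-security/vomsdir"
.fi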
.SH PBS options .TP .B pbs_bin_path the path to the qstat, pbsnodes, qmgr and other PBS binaries; no need to set if PBS is not used .IR Example: .br pbs_bin_path="/usr/bin" .TP .B pbs_log_path the path of the PBS server logfiles which are used by A-REX to determine whether a PBS job is completed. If not specified, A-REX will use qstat for that. .IR Example: .br pbs_log_path="/var/spool/pbs/server_logs" .br .SH Condor options .TP .B condor_rank condor_rank - If you are not happy with the way Condor picks nodes when running jobs, you can define your own ranking algorithm by optionally setting the condor_rank attribute. condor_rank should be set to a ClassAd float expression that you could use in the Rank attribute in a Condor job description. No need to set if Condor is not used. .IR Example: .br condor_rank="(1-LoadAvg/2)*(1-LoadAvg/2)*Memory/1000*KFlops/1000000" .TP .B condor_bin_path condor_bin_path - Path to Condor binaries. Must be set if Condor is used. .IR Example: .br condor_bin_path=/opt/condor/bin .br .TP .B condor_config condor_config - Path to the Condor config file. Must be set if Condor is used and the config file is not in its default location (/etc/condor/condor_config or ~/condor/condor_config). The full path to the file should be given. .IR Example: .br condor_config=/opt/condor/etc/condor_config .br .SH SGE options .TP .B sge_bin_path sge_bin_path - Path to Sun Grid Engine (SGE) binaries, MUST be set if SGE is the LRMS used .IR Example: .br sge_bin_path="/opt/n1ge6/bin/lx24-x86" .TP .B sge_root sge_root - Path to the SGE installation directory. MUST be set if SGE is used. .IR Example: .br sge_root="/opt/n1ge6" .TP .B sge_cell sge_cell - The name of the SGE cell to use. This option is only necessary in case SGE is set up with a cell name different from 'default' .IR Example: .br sge_cell="default" .TP .B sge_qmaster_port sge_qmaster_port, sge_execd_port - these options should be used in case the SGE command line clients require the SGE_QMASTER_PORT and SGE_EXECD_PORT environment variables to be set. Usually they are not necessary. .IR Example: .br sge_qmaster_port="536" .br sge_execd_port="537" .SH SLURM options .TP .B slurm_bin_path slurm_bin_path - Path to SLURM binaries, must be set if installed outside of the normal $PATH .IR Example: .br slurm_bin_path="/usr/bin" .TP .B slurm_wakeupperiod How long the infosys should wait before querying SLURM for new data (seconds) .IR Example: .br slurm_wakeupperiod="15" .TP .B slurm_use_sacct Whether ARC should use sacct instead of scontrol to get information on finished jobs. Requires that accounting is turned on in SLURM. Default is "no". .IR Example: .br slurm_use_sacct="yes"
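.PP
As an illustrative sketch only (the queue name and path are placeholders), a SLURM-based setup might combine the LRMS-related [common] options as:
.nf
[common]
lrms="slurm gridlong"
slurm_bin_path="/usr/bin"
slurm_wakeupperiod="15"
slurm_use_sacct="yes"
.fi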
.SH LSF options .TP .B lsf_bin_path the path to the LSF bin folder; no need to set if LSF is not used .IR Example: .br lsf_bin_path="/usr/local/lsf/bin/" .TP .B lsf_profile_path the path to profile.lsf; no need to set if LSF is not used .IR Example: .br lsf_profile_path="/usr/share/lsf/conf" .br .SH LL options .TP .B ll_bin_path the path to the LoadLeveler bin folder; no need to set if LoadLeveler is not used .IR Example: .br ll_bin_path="/opt/ibmll/LoadL/full/bin" .TP .B ll_consumable_resources support for a LoadLeveler setup using Consumable Resources; no need to set if LoadLeveler is not used .IR Example: .br ll_consumable_resources="yes" .SH Desktop Grid options .TP .B dgbridge_stage_dir Desktop Grid Bridge web publish directory .IR Example: .br dgbridge_stage_dir="/var/www/DGBridge" .TP .B dgbridge_stage_prepend Desktop Grid Bridge URL prefix pointing to dgbridge_stage_dir .IR Example: .br dgbridge_stage_prepend="http://edgi-bridge.example.com/DGBridge/" .SH Boinc options .TP .B boinc_db_host boinc_db_port boinc_db_name boinc_db_user boinc_db_pass Connection details for the Boinc database. .IR Example: .br boinc_db_host="localhost" .br boinc_db_port="3306" .br boinc_db_name="myproject" .br boinc_db_user="boinc" .br boinc_db_pass="password" .TP .B boinc_app_id boinc_app_id id - ID of the app handled by this CE. Setting this option makes database queries much faster in large projects with many apps. .IR Example: .br boinc_app_id="1" .SH Other [common] options .TP .B globus_tcp_port_range globus_tcp_port_range, globus_udp_port_range - Firewall configuration. In a firewalled environment the software which uses GSI needs to know what ports are available. The full documentation can be found at: http://dev.globus.org/wiki/FirewallHowTo These variables are similar to the Globus environment variables GLOBUS_TCP_PORT_RANGE and GLOBUS_UDP_PORT_RANGE. These variables are not limited to [common], but can be set individually for each service in the corresponding sections: [grid-manager], [gridftpd] .IR Example: .br globus_tcp_port_range="9000,12000" .br globus_udp_port_range="9000,12000" .TP .B x509_user_key x509_user_cert, x509_user_key - Server credentials location. These variables are similar to the GSI environment variables X509_USER_KEY and X509_USER_CERT. These variables are not limited to [common], but can be set individually for each service in the corresponding sections: [grid-manager], [gridftpd], [nordugridmap] .IR Example: .br x509_user_key="/etc/grid-security/hostkey.pem" .br x509_user_cert="/etc/grid-security/hostcert.pem" .TP .B x509_cert_dir x509_cert_dir - Location of trusted CA certificates. This variable is similar to the GSI environment variable X509_CERT_DIR. This variable is not limited to [common], but can be set individually for each service in the corresponding sections: [grid-manager], [gridftpd] .IR Example: .br x509_cert_dir="/etc/grid-security/certificates" .TP .B gridmap gridmap - The gridmap file location. This variable is similar to the GSI environment variable GRIDMAP. This variable is not limited to [common], but can be set individually for each service in the corresponding sections: [grid-manager], [gridftpd] The default is /etc/grid-security/grid-mapfile .IR Example: .br gridmap="/etc/grid-security/grid-mapfile" .TP .B voms_processing voms_processing - Defines how to behave if errors are detected during VOMS AC processing. relaxed - use everything that passed validation. standard - same as relaxed but fail if parsing errors took place and the VOMS extension is marked as critical. This is the default. strict - fail if any parsing error was discovered. noerrors - fail if any parsing or validation error happened. This command can also be used in the [grid-manager] and [gridftpd] blocks. .IR Example: .br voms_processing="standard"
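.PP
For illustration only (the paths shown are the defaults mentioned above), the security-related [common] options might be combined as in the following sketch:
.nf
[common]
x509_user_key="/etc/grid-security/hostkey.pem"
x509_user_cert="/etc/grid-security/hostcert.pem"
x509_cert_dir="/etc/grid-security/certificates"
gridmap="/etc/grid-security/grid-mapfile"
voms_processing="standard"
.fi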
.TP .B voms_trust_chain voms_trust_chain - Define the DN chain that the host services trust when the VOMS AC from a peer VOMS proxy certificate is parsed and validated. There can be multiple "voms_trust_chain" entries, each corresponding to a VOMS server. This variable is similar to the information in a *.lsc file, but with two differences: 1) you don't need to create a *.lsc file per VOMS server, only a chain per VOMS server; 2) regular expressions are supported when matching the DNs. This variable is not limited to [common], but can be used in the [grid-manager] and [gridftpd] blocks. This variable should be used together with voms_processing. This variable overrides the information in *.lsc files if they exist. .nf .IR Example: .br voms_trust_chain = "/O=Grid/O=NorduGrid/CN=host/arthur.hep.lu.se" "/O=Grid/O=NorduGrid/CN=NorduGrid Certification Authority" .br voms_trust_chain = "/O=Grid/O=NorduGrid/CN=host/emi-arc.eu" "/O=Grid/O=NorduGrid/CN=NorduGrid Certification Authority" .br voms_trust_chain = "^/O=Grid/O=NorduGrid" .fi .TP .B enable_perflog_reporting enable_perflog_reporting expert-debug-on/no - Switch on or off performance reporting. Default is no. Only switch it on if you specifically need it, and are aware of the possible local root exploit due to the permissive directory. .IR Example: .br enable_perflog_reporting="expert-debug-on" .TP .B perflogdir perflogdir logdir - Directory where performance logs should be stored. Default is /var/log/arc/perflogs .IR Example: .br perflogdir="/var/log/arc/perflogs" .SH [vo] block The [vo] block is used to define VOs and to generate mapfiles from user lists maintained by VO databases. The [vo] block is a configuration block for the nordugridmap utility. Please note that [vo] block processing by the nordugridmap utility depends on parameters defined in the [nordugridmap] block. A [vo] block by itself does not affect authorization of a client/user. For that, the label defined by the vo="" attribute may be used in a [group] block with the 'vo' rule. Mapfiles generated by the nordugridmap utility can also be used with the 'file' rule. .TP .B id id blockid - specifies the unique configuration block id (this does not affect the nordugridmap utility) .IR Example: .br id="vo_1" .TP .B vo vo vo_name - specifies the VO name; this name can be used in other blocks. MUST be given. .IR Example: .br vo="nordugrid" .TP .B file file path - the output gridmap-file where the GENERATED mapping list will be stored. See the parameters below to define how to generate this file. If the same file is specified as output for different [vo] blocks, nordugridmap will automatically merge the entries in the given block order. Default is '/etc/grid-security/gridmapfile'. .IR Example: .br file="/etc/grid-security/VOs/atlas-users" .TP .B source source URL - the URL of the VO database which is assigned to this VO. nordugridmap will use this URL to automatically generate and keep up to date the userlist (mapfile) specified by the 'file' attribute. URL is a multivalued attribute; several sources can be specified for the [vo] block and all the users from those sources will be merged into the same file. The source URLs are processed in the given order. Currently supported URL types are: http(s):// - URL to a plain text file.
The file should contain a list of DNs with an optional issuer certificate authority DN (see require_issuerdn): "user DN" ["issuer DN"] voms(s):// - URL to a VOMS-Admin interface nordugrid - add NorduGrid VO members ldap:// - expect an LDAP-schema formatted VO Group file:// - local file (stand-alone or dynamically generated by nordugridmap). The file should contain a list of DNs with an optional mapped unixid: "user DN" [mapped user ID] The result of optional mapped unixid processing depends on the mapuser_processing option settings. vo:// - reference to another [vo] configuration block edg-mkgridmap:// - local configuration file used by the edg-mkgridmap tool. nordugridmap will parse the configuration from the file and process it as an additional [vo] block that is referred to automatically in place of the specified URL. This allows easy migration from an edg-mkgridmap solution without rewriting your previous configuration (NOTE that the rarely used 'auth' directive and 'AUTO' mapping options are not supported). You can use either vo:// or file:// entries to specify dependencies between [vo] blocks, but using vo:// is the recommended way. For each separate source URL it is possible to override some parameter values, using the following syntax: source="URL < parameter1=value1 parameter2=value2" You can override the following parameters: mapped_unixid for http(s), voms(s), ldap and file URLs cache_enable for http(s), voms(s), ldap and file URLs voms_method for voms(s) URLs mapuser_processing for file URLs with mapped_unixid='' overrides (controls mapped_unixid overriding behaviour for the URL) .IR Example: .br source="vomss://voms.ndgf.org:8443/voms/nordugrid.org" .br source="vomss://lcg-voms.cern.ch:8443/voms/atlas?/atlas/Role=VO-Admin < mapped_unixid=atlasadmin" .br source="vomss://kuiken.nikhef.nl:8443/voms/gin.ggf.org < voms_method=get" .br source="http://www.nordugrid.org/developers.dn" .br source="ldap://grid-vo.nikhef.nl/ou=lcg1,o=atlas,dc=eu-datagrid,dc=org" .br source="file:///etc/grid-security/priviliged_users.dn" .br source="vo://nordugrid_community" .br source="nordugrid" .TP .B mapped_unixid mapped_unixid unixid - the local UNIXID which is used in the grid-mapfile generated by the nordugridmap utility. If any of the sources have already provided mapping information (file:// or vo://), the behaviour depends on the 'mapuser_processing' option in the [nordugridmap] block: mapuser_processing = 'overwrite': ignore the already provided mapping and apply mapped_unixid for all sources mapuser_processing = 'keep': apply mapped_unixid only for sources that do not already have mapping information A [vo] block can only have one UNIXID. If 'mapped_unixid' is not specified, the behaviour depends on the 'allow_empty_unixid' option in the [nordugridmap] block: allow_empty_unixid = 'yes': an empty value will be used for mapped_unixid, which means that nordugridmap will generate only the list of DNs without mapping (consider using mapuser_processing='overwrite' along with this option, or sources that do not provide previously defined mapping information) allow_empty_unixid = 'no': skip users without mapping information (if no mapping information is provided by the sources) .IR Example: .br mapped_unixid="gridtest" .TP .B voms_fqan_map voms_fqan_map fqan unixid - the local UNIXID which is used to map voms(s) sources with the specific FQAN given. Several voms_fqan_map entries can be specified for a [vo] block. For each voms(s) source in the [vo] block and every voms_fqan_map record, a separate source record will be automatically generated with mapped_unixid overridden to the specified one. Sources are generated in the given voms_fqan_map order. The original voms(s) source URLs are processed LAST. This simplifies configuration, especially in redundancy setups where several VOMS servers are used for the same VO. .IR Example: .br voms_fqan_map="/atlas/Role=VO-Admin atlasadmin" .br voms_fqan_map="/atlas/Role=production atlasprod"
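.PP
Putting these options together, an illustrative (not prescriptive) [vo] block might look like the sketch below; the VO name, file path, sources and unix account names are placeholders:
.nf
[vo]
id="vo_1"
vo="nordugrid"
file="/etc/grid-security/VOs/nordugrid-users"
source="vomss://voms.ndgf.org:8443/voms/nordugrid.org"
source="file:///etc/grid-security/local_users.dn"
mapped_unixid="griduser"
voms_fqan_map="/nordugrid/Role=admin gridadmin"
.fi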
.TP .B require_issuerdn require_issuerdn yes/no - another nordugridmap option. 'yes' maps only those DNs obtained from the URLs which have the corresponding public CA packages installed. Default is 'no'. Note that some sources do not provide issuer information (like voms(s):// or file://). If these sources are used within a [vo] block and require_issuerdn is set to 'yes', the behaviour depends on the issuer_processing option in the [nordugridmap] block: issuer_processing = 'relaxed': check only those records that have issuer information provided, allow other sources issuer_processing = 'strict': if issuer information was not found, the record is filtered out and will not be passed into the mapfile .IR Example: .br require_issuerdn="no" .TP .B filter filter ACL string - An ACL filter for the nordugridmap utility. Multiple allow/deny statements are possible. The fetched DNs are filtered against the specified rules before they are added to the generated mapfile. * can be used as a wildcard. You may run nordugridmap with the --test command line option to see how the filters you specified work. If at least one allow filter is specified, an implicit deny is used at the end of the ACL. If only deny filters are present, an implicit allow is used at the end. .IR Example: .br filter="deny *infn*" .br filter="allow *NorduGrid*" .SH [group] Authorisation block These configuration blocks define the rules that determine to which authorization group a user belongs. The group should not be mistaken for a virtual organisation (VO). A group may match a single VO if only a single check (rule) on VO membership is performed. It is however more common to allow multiple VOs in a single group. ARC also allows many other ways to assign users to groups. Technically, permissions are only granted to groups, not directly to VOs. The block specifies a single authorization group. There may be multiple [group] blocks in the configuration, defining multiple authorization groups. The block can be specified in two ways - either using a [group/group1]-like subblock declaration per group, or just [group]. The two formats are equivalent. Every block (until the beginning of the next block or the end of the file) defines one authorization group. .B IMPORTANT: Rules in a group are processed in their order of appearance. The first matching rule decides the membership of the user in the group and the processing STOPS. There are positively and negatively matching rules. If a rule is matched positively then the user tested is accepted into the respective group and further processing is stopped. Upon a negative match the user would be rejected for that group - processing stops too. The sign of a rule is determined by prepending the rule with a '+' (for positive) or '-' (for negative) sign. '+' is the default and can be omitted. A rule may also be prepended with '!' to invert its result, which lets the rule match the complement of users. That complement operator ('!') may be combined with the operator for positive or negative matching. A group MUST be defined before it may be used. In this respect the arc.conf is ORDER SENSITIVE. The authorization groups can be used in [gridftpd] and in its sub-blocks.
The syntax of their specification varies with the service they are used for. For using authorization groups and VO blocks in the HED framework, please read "Security Framework of ARC" at http://www.nordugrid.org/documents/arc-security-documentation.pdf .TP .B name name group_name - Specify the name of the group. If there is no such command in the block, the name of the subblock is used instead (that is what subblocks are used for), for example [group/users]. .IR Example: .br name="users" .TP .B subject subject certificate_subject - Rule to match a specific subject of the user's X.509 certificate. No masks, patterns or regular expressions are allowed. For more information about X.509 refer to http://www.wikipedia.org/wiki/X509 .IR Example: .br subject="/O=Grid/O=Big VO/CN=Main Boss" .TP .B file file path - Start reading rules from another file. That file has a slightly different format: it can't contain blocks, and commands are separated from arguments by a space. Also, the word "subject" in the subject command may be skipped. That makes it convenient to directly add gridmap-like lists to an authorization group. .IR Example: .br file="/etc/grid-security/local_users" .TP .B voms voms vo group role capabilities - Match a VOMS attribute in the user's credential. Use '*' to match any value. More information about VOMS can be found at http://grid-auth.infn.it .IR Example: .br voms="nordugrid /nordugrid/Guests * *" .TP .B group group group_name [group_name ...] - Match a user already belonging to one of the specified groups. Groups referred to here must be defined earlier in the configuration file. Multiple group names may be specified for this rule. That allows creating a hierarchical structure of authorization groups, e.g. 'clients' are those which are 'users' and 'admins'. .IR Example: .br group="local_admins" .TP .B plugin plugin timeout path [argument ...] - Run an external executable or a function from a shared library. The rule is matched if the plugin returns 0. In the arguments the following substitutions are supported: %D - subject of certificate %P - path to proxy For more about plugins, read the documentation. .IR Example: .br plugin="10 /opt/external/bin/permis %P" .TP .B lcas lcas library directory database - Call LCAS functions to check the rule. Here 'library' is the path to the LCAS shared library, either absolute or relative to 'directory'; 'directory' is the path to the LCAS installation directory, equivalent to the LCAS_DIR variable; 'database' is the path to the LCAS database, equivalent to the LCAS_DB_FILE variable. Each argument except 'library' is optional and may be either skipped or replaced with '*'. .IR Example: .br lcas="" .TP .B remote remote URL ... - Check the user's credentials against a remote service. Only DN groups stored in LDAP directories are supported. Multiple URLs are allowed in this rule. .IR Example: .br remote="ldap://grid-vo.nordugrid.org/ou=People,dc=nordugrid,dc=org" .TP .B vo vo vo_name ... - Match a user belonging to the VO specified by "vo=vo_name" as configured in one of the PREVIOUSLY defined [vo] blocks. Multiple VO names are allowed for this rule. .IR Example: .br vo="nordugrid" .TP .B all all - Matches any user identity. This command requires no arguments but can still be written as all="" or all= for consistency. .IR Example: .br all=""
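.PP
For illustration, a [group] block combining several of the rules above might look like the following sketch; it assumes a [vo] block with vo="nordugrid" was defined earlier, and the group name and VOMS attributes are placeholders:
.nf
[group/users]
name="users"
vo="nordugrid"
voms="nordugrid /nordugrid/Guests * *"
.fi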
.SH The [grid-manager] block The [grid-manager] block configures the part of the A-REX service hosted in .B arched taking care of the grid tasks on the frontend (stagein/stageout, LRMS job submission, caching, etc.). The name of this block is historical and comes from the times when this functionality was handled by a separate process called grid-manager. This section also configures the WS interfaces of the A-REX service, also hosted by the same container. .TP .B controldir controldir path - The directory of the A-REX's internal job log files, not needed on the nodes. .IR Example: .br controldir="/var/spool/nordugrid/jobstatus" .TP .B sessiondir sessiondir path [drain] - the directory which holds the sessiondirs of the grid jobs. Multiple session directories may be specified by specifying multiple sessiondir commands. In this case jobs are spread evenly over the session directories. If sessiondir="*" is set, the session directory will be spread over the ${HOME}/.jobs directories of every locally mapped unix user. It is preferred to use common session directories. The path may be followed by "drain", in which case no new jobs will be assigned to that sessiondir, but current jobs will still be processed and accessible. .IR Example: .br sessiondir="/scratch/grid" .br sessiondir="/mnt/grid drain" .TP .B runtimedir runtimedir path - The directory which holds the runtime environment scripts; it should be available on the nodes as well! The runtime environments are automatically detected and advertised in the information system. .IR Example: .br runtimedir="/SOFTWARE/runtime" .TP .B scratchdir scratchdir path - the path on the computing node to move the session directory to before execution. If defined, it should contain the path to the directory on the computing node which can be used to store a job's files during execution. Sets the environment variable RUNTIME_LOCAL_SCRATCH_DIR. Default is not to move the session directory before execution. .IR Example: .br scratchdir="/local/scratch/" .TP .B shared_scratch shared_scratch path - the path on the frontend where scratchdir can be found. If defined, it should contain the path corresponding to that set in scratchdir as seen on the frontend machine. Sets the environment variable RUNTIME_FRONTEND_SEES_NODE. .IR Example: .br shared_scratch="/mnt/scratch" .TP .B nodename nodename path - command to obtain the hostname of a computing node. .IR Example: .br nodename="/bin/hostname" .TP .B cachedir cachedir cache_path [link_path] - specifies a directory to store cached data. Multiple cache directories may be specified by specifying multiple cachedir commands. Cached data will be distributed evenly over the caches. Specifying no cachedir command or commands with an empty path disables caching. The optional link_path specifies the path at which the cache_path is accessible on computing nodes, if it is different from the path on the A-REX host. Example: cachedir="/shared/cache /frontend/jobcache" If "link_path" is set to '.' files are not soft-linked, but copied to the session directory. If a cache directory needs to be drained, then cachedir should specify "drain" as the link path, in which case no new files will be added to the cache. .IR Example: .br cachedir="/scratch/cache" .br cachedir="/fs1/cache drain" .TP .B remotecachedir remotecachedir cache_path [link_path] - specifies caches which are under the control of other A-REXs, but which this A-REX can have read-only access to. Multiple remote cache directories may be specified by specifying multiple remotecachedir commands. If a file is not available in the paths specified by cachedir, A-REX looks in the remote caches. link_path has the same meaning as in cachedir, but the special path ``replicate'' means files will be replicated from remote caches to local caches when they are requested.
.IR Example: .br remotecachedir="/mnt/fs1/cache replicate" .TP .B cachesize cachesize max min - specifies high and low watermarks for the space used by the cache, as a percentage of the space on the file system on which the cache directory is located. When the max is exceeded, files will be deleted to bring the used space down to the min level. It is a good idea to have the cache on its own separate file system. To turn off this feature, "cachesize" without parameters can be specified. .IR Example: .br cachesize="80 70" .TP .B cachelifetime If cache cleaning is enabled, files accessed less recently than the given time period will be deleted. Example values of this option are 1800, 90s, 24h, 30d. When no suffix is given the unit is seconds. .IR Example: .br cachelifetime="30d" .TP .B cacheshared cacheshared yes|no - specifies whether the caches share a filesystem with other data. If set to yes then cache-clean calculates the size of the cache instead of using the filesystem used space. .IR Example: .br cacheshared="yes" .TP .B cachespacetool cachespacetool path [options] - specifies an alternative tool to "df" that cache-clean should use to obtain space information on the cache file system. The output of this command must be "total_bytes used_bytes". The cache directory is passed as the last argument to this command. .IR Example: .br cachespacetool="/etc/getspace.sh" .TP .B cachelogfile cachelogfile path - specifies the filename where the output of the cache-clean tool should be logged. Defaults to /var/log/arc/cache-clean.log. .IR Example: .br cachelogfile="/tmp/cache-clean.log" .TP .B cacheloglevel cacheloglevel level - specifies the level of logging by the cache-clean tool, between 0 (FATAL) and 5 (DEBUG). Defaults to 3 (INFO). .IR Example: .br cacheloglevel="4" .TP .B cachecleantimeout cachecleantimeout time - the timeout in seconds for running the cache-clean tool. If using a large cache or a slow file system this value can be increased to allow the cleaning to complete. Defaults to 3600 (1 hour). .IR Example: .br cachecleantimeout="10000" .TP .B cacheaccess cacheaccess rule - rules for allowing access to files in the cache remotely through the A-REX web interface. A rule has three parts: 1. Regular expression defining a URL pattern 2. Credential attribute to match against a client's credential 3. Regular expression defining a credential value to match against a client's credential A client is allowed to access the cached file if a URL pattern matches the cached file URL and the client's credential has the attribute and matches the value required for that pattern. Possible values for the credential attribute are dn, voms:vo, voms:role and voms:group. Remote cache access requires that the A-REX web interface is enabled via arex_mount_point. .IR Examples: .br cacheaccess="gsiftp://host.org/private/data/.* voms:vo myvo:production" .br cacheaccess="gsiftp://host.org/private/data/ng/.* dn /O=Grid/O=NorduGrid/.*" .TP .B enable_cache_service enable_cache_service yes|no - Turn on or off the cache service interface. If turned on, the cache service must be installed and the A-REX WS interface must be enabled via arex_mount_point. The interface is accessible at the same host and port as given in arex_mount_point with the path /cacheservice. Default is off. .IR Example: .br enable_cache_service="yes" .TP .B user user user[:group] - Switch to a non-root user/group after startup. Use with caution. .IR Example: .br user="grid"
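.PP
As an illustrative sketch only (all paths and values are placeholders), the directory and cache options described above might be combined in a [grid-manager] block as:
.nf
[grid-manager]
controldir="/var/spool/nordugrid/jobstatus"
sessiondir="/scratch/grid"
cachedir="/scratch/cache"
cachesize="80 70"
cachelifetime="30d"
.fi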
.TP .B debug debug debuglevel - Set the debug level of the arched daemon hosting the A-REX service, between 0 (FATAL) and 5 (DEBUG). Defaults to 3 (INFO). .IR Example: .br debug="2" .TP .B logfile logfile path - Specify the log file location. If using an external log rotation tool, be careful to make sure it matches the path specified here. Default log file is "/var/log/arc/grid-manager.log" .IR Example: .br logfile="/var/log/arc/grid-manager.log" .TP .B wslogfile wslogfile path - Specify the log file location for WS-interface operations. This file is only created if the WS-interface is enabled through the arex_mount_point option. The logsize, logreopen and debug options also apply to this file. If using an external log rotation tool, be careful to make sure it matches the path specified here. It is possible to specify the same file as logfile to combine the logs. Default is /var/log/arc/ws-interface.log. .IR Example: .br wslogfile="/var/log/arc/ws-interface.log" .TP .B logsize logsize size [number] - 'size' specifies in bytes how big the log file is allowed to grow (approximately). If the log file exceeds the specified size it is renamed into logfile.0, logfile.0 is renamed into logfile.1, etc., up to 'number' log files. Don't set logsize if you don't want to enable the ARC log rotation because another log rotation tool is used. .IR Example: .br logsize="100000 2" .TP .B logreopen logreopen yes|no - Specifies whether the log file must be closed after each record is added. By default arched keeps the log file open. This option can be used to make the behaviour of arched compatible with external log rotation utilities. .IR Example: .br logreopen="no" .TP .B pidfile pidfile path - Specify the location of the file containing the PID of the daemon process. This is useful for automatic start/stop scripts. .IR Example: .br pidfile="/var/run/arched-arex.pid" .TP .B gnu_time the GNU time command, default is /usr/bin/time .IR Example: .br gnu_time="/usr/bin/time" .TP .B shared_filesystem whether the computing node can access the session directory on the frontend, defaults to 'yes' .IR Example: .br shared_filesystem="yes" .TP .B mail specifies the email address from which the notification mails are sent. .IR Example: .br mail="grid.support@somewhere.org" .TP .B joblog joblog path - specifies where to store a specialized log about started and finished jobs. If the path is empty or the command is absent, the log is not written. This log is not used by any other part of ARC, so keep it disabled unless needed. .IR Example: .br joblog="/var/log/arc/gm-jobs.log" .TP .B jobreport jobreport [URL ...] [timeout] - tells A-REX to report all started and finished jobs to the logger service at 'URL'. Multiple URLs and multiple jobreport commands are allowed; in that case the job info will be sent to all of them. Timeout specifies how long (in days) to try to pass the information before giving up. Suggested value is 30 days. .IR Example: .br jobreport="https://grid.uio.no:8001/logger" .TP .B jobreport_publisher jobreport_publisher publisher - the name of the accounting records publisher. .IR Example: .br jobreport_publisher="jura" .TP .B jobreport_credentials jobreport_credentials [key_file [cert_file [ca_dir]]] - specifies the credentials for accessing the accounting service. .IR Example: .br jobreport_credentials="/etc/grid-security/hostkey.pem /etc/grid-security/hostcert.pem /etc/grid-security/certificates" .TP .B jobreport_options jobreport_options [name:value, ...] - specifies additional parameters for the job reporter. .IR Example: .br jobreport_options="urbatch:50,archiving:/tmp/archive,topic:/topic/global.accounting.cpu.central"
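.PP
For illustration only (the URL and paths are placeholders, and the trailing 30 is the suggested timeout in days), accounting reporting might be configured with these options as:
.nf
[grid-manager]
jobreport="https://grid.uio.no:8001/logger 30"
jobreport_publisher="jura"
jobreport_credentials="/etc/grid-security/hostkey.pem /etc/grid-security/hostcert.pem /etc/grid-security/certificates"
jobreport_options="urbatch:50,archiving:/tmp/archive"
.fi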
.TP .B jobreport_logfile jobreport_logfile path - the name of the file to store the stderr of the publisher executable. .IR Example: .br jobreport_logfile="/var/log/arc/jura.log" .TP .B max_job_control_requests max_job_control_requests number - max number of simultaneously processed job management requests over the WS interface - like job submission, cancel, status check etc. Default value is 100. .IR Example: .br max_job_control_requests="100" .TP .B max_infosys_requests max_infosys_requests number - max number of simultaneously processed resource info requests over the WS interface. Default value is 1. .IR Example: .br max_infosys_requests="1" .TP .B max_data_transfer_requests max_data_transfer_requests number - max number of simultaneously processed data transfer requests over the WS interface - like data staging. Default value is 100. .IR Example: .br max_data_transfer_requests="100" .TP .B maxjobs maxjobs number1 number2 number3 number4 number5 - specifies the maximum allowed number of jobs. number1 - jobs which are not in FINISHED state (jobs tracked in RAM) number2 - jobs being run (SUBMITTING, INLRMS states) number3 - jobs being processed per DN number4 - jobs in the whole system number5 - LRMS scripts limit (jobs in SUBMITTING and CANCELING) A missing number or -1 means no limit. .IR Example: .br maxjobs="10000 10 2000" .TP .B wakeupperiod wakeupperiod time - specifies how often A-REX checks for newly arrived jobs, job state change requests, etc. That is, the responsiveness of A-REX. 'time' is the time period in seconds. Default is 3 minutes. Usually this command is not needed because important state changes also trigger out-of-schedule checks. NOTE: This parameter does not affect the responsiveness of the backend scripts - especially scan-*-job. That means that an upper estimate of the time for detecting that a job has finished executing is the sum of the responsiveness of the backend script + wakeupperiod. .IR Example: .br wakeupperiod="180" .TP .B defaultttl defaultttl [ttl [ttr]] - ttl is the time in seconds for how long a session directory will survive after job execution has finished. If not specified, the default is 1 week. ttr is how long information about a job will be kept after the session directory is deleted. If not specified, the ttr default is one month. .IR Example: .br defaultttl="259200" .TP .B authplugin authplugin state options plugin_path - Every time a job goes to 'state', run the 'plugin_path' executable. Options consist of key=value pairs separated by ','. Possible keys are: timeout - wait for the result no longer than 'value' seconds (timeout= can be omitted); onsuccess, onfailure, ontimeout - what to do if the plugin exited with exit code 0, a non-zero exit code, or the timeout was reached. Possible actions are: pass - continue executing the job, fail - cancel the job, log - log the failure and continue executing the job. .IR Example: .br authplugin="ACCEPTED timeout=10 /usr/libexec/arc/bank %C/job.%I.local %S" .TP .B authplugin ARC is distributed with the plugin "inputcheck". Its purpose is to check whether the input files requested in a job's RSL are accessible from this machine. It is better to run it before the job enters the cluster. It accepts 2 arguments: the names of the files containing the RSL and the credentials' proxy. This plugin is only guaranteed to work for jobs submitted through the legacy GridFTP interface, as this is the only interface for which credentials in the form of proxy certificate files are guaranteed to exist.
.IR Example: .br authplugin="ACCEPTED 60 /usr/libexec/arc/inputcheck %C/job.%I.description %C/job.%I.proxy" .TP .B authplugin ARC is distributed with the plugin "arc-vomsac-check". Its purpose is to enforce per-queue access policies based on the VOMS attributes present in the user's proxy certificate. The plugin should be run before the job enters the cluster. It requires 2 arguments: the path to the job information .local file and the path to the credentials file. The enforced per-queue access policies are configured with the 'ac_policy' option in the [queue/name] configuration block. .IR Example: .br authplugin="ACCEPTED 60 /usr/libexec/arc/arc-vomsac-check -L %C/job.%I.local -P %C/job.%I.proxy" .TP .B localcred localcred timeout plugin_path - Every time an external executable is run, this plugin will be called. Its purpose is to set non-unix permissions/credentials on running tasks. Note: the process itself can still be run under the root account. If plugin_path looks like somename@somepath, then the function 'somename' from the shared library located at 'somepath' will be called (timeout is not effective in that case). A-REX must be run as root to use this option. Comment it out unless you really know what you are doing. .IR Example: .br localcred="0 acquire@/opt/nordugrid/lib/afs.so %C/job.%I.proxy" .TP .B norootpower norootpower yes|no - if set to yes, all job management processes will switch to the mapped user's identity while accessing the session directory. This is useful if the session directory is on NFS with root squashing turned on. Default is no. .IR Example: .br norootpower="yes" .TP .B allowsubmit allowsubmit [group ...] - list of authorization groups of users allowed to submit new jobs while "allownew=no" is active in the jobplugin configuration. Multiple commands are allowed. .IR Example: .br allowsubmit="mygroup" .br allowsubmit="yourgroup" .TP .B helper helper user executable arguments - associates an external program with A-REX. This program will be kept running under the account of the specified user. Currently only '.' is supported as the user, corresponding to the user running A-REX. Every time this executable finishes it will be started again. This helper plugin mechanism can be used as an alternative to /etc/init.d or cron to (re)start external processes. .IR Example: .br helper=". /usr/local/bin/myutility" .TP .B tmpdir tmpdir path - directory for temporary files used by A-REX, default is /tmp .IR Example: .br tmpdir="/tmp" .TP .B maxrerun maxrerun - specifies how many times a job can be rerun if it failed in the LRMS. Default value is 5. This is only an upper limit; the actual rerun value is set by the user in the job's xRSL. .IR Example: .br maxrerun="5" .TP .B globus_tcp_port_range globus_tcp_port_range, globus_udp_port_range - Firewall configuration. .IR Example: .br globus_tcp_port_range="9000,12000" .br globus_udp_port_range="9000,12000" .TP .B x509_user_key x509_user_cert, x509_user_key - Location of credentials for the service. These may be used by any module or external utility which needs to contact another service not on behalf of the user who submitted the job.
.IR Example: .br x509_user_key="/etc/grid-security/hostkey.pem" .br x509_user_cert="/etc/grid-security/hostcert.pem" .TP .B x509_cert_dir x509_cert_dir - Location of trusted CA certificates .IR Example: .br x509_cert_dir="/etc/grid-security/certificates" .TP .B http_proxy http_proxy - http proxy server location .IR Example: .br http_proxy="proxy.mydomain.org:3128" .TP .B fixdirectories fixdirectories yes|missing|no - specifies whether during startup A-REX should create all directories needed for its operation and set suitable default permissions. If "no" is specified then A-REX does nothing to prepare its operational environment. In case of "missing" A-REX only creates and sets permissions for directories which are not present yet. For "yes" all directories are created and permissions for all used directories are set to default safe values. Default behaviour is as if "yes" is specified. .IR Example: .br fixdirectories="yes" .TP .B arex_mount_point arex_mount_point - enables the web services interfaces, including job execution and the information system. The argument is an https URL defining the endpoint host, port and path: https://<hostname>:<port>/<path> In order to submit a job a client must specify the exact published path. Make sure the chosen port is not blocked by a firewall or other security rules. .IR Example: .br arex_mount_point="https://piff.hep.lu.se:443/arex" .TP .B enable_arc_interface enable_arc_interface yes|no - turns on or off ARC's own WS interface based on OGSA BES and WSRF. If enabled, the interface can be accessed at the URL specified by arex_mount_point (this option must also be specified). Default is yes. .IR Example: .br enable_arc_interface="yes" .TP .B enable_emies_interface enable_emies_interface - enable the EMI Execution Service interface. If enabled, the interface can be accessed at the URL specified in arex_mount_point (this option must also be specified). .IR Example: .br enable_emies_interface="yes" .TP .B arguspep_endpoint arguspep_endpoint - specifies the URL of the Argus PEPD service (by default the Argus PEPD service runs on port 8154 with path /authz) to use for authorization and user mapping. Note that if the "requireClientCertAuthentication" item (default is false) of pepd.ini (the configuration of the Argus PEPD service) is set to 'true', then https should be used, otherwise http is appropriate. If specified, Argus is contacted for every operation requested through the WS interface (see arex_mount_point). .IR Example: .br arguspep_endpoint="https://somehost.somedomain:8154/authz" .TP .B arguspep_profile arguspep_profile - defines which communication profile to use while communicating with the Argus PEPD service. Possible values are: direct - pass all authorization attributes (only for debugging) subject - pass only the subject name of the client cream - makes A-REX pretend it is a gLite CREAM service. This is the recommended profile for interoperability with gLite. emi - new profile developed in the EMI project. This is the default option. .IR Example: .br arguspep_profile="cream" .TP .B arguspep_usermap arguspep_usermap - specifies whether the response from the Argus service may define the mapping of the client to a local account. Possible values are 'yes' and 'no'. Default is 'no'. Argus is contacted after all other user mapping is performed. Hence it can override all other decisions. .IR Example: .br arguspep_usermap="no" .TP .B arguspdp_endpoint arguspdp_endpoint - specifies the URL of the Argus PDP service (by default the Argus PDP service runs on port 8152 with path /authz) to use for authorization and user mapping. Note that if the "requireClientCertAuthentication" item (default is false) of pdp.ini (the configuration of the Argus PDP service) is set to 'true', then https should be used, otherwise http is appropriate. If specified, Argus is contacted for every operation requested through the WS interface (see arex_mount_point). .IR Example: .br arguspdp_endpoint="https://somehost.somedomain:8152/authz"
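.PP
As a hedged illustration only (the endpoint hostnames are placeholders), Argus-based authorization for the WS interface might be configured in the [grid-manager] block as:
.nf
[grid-manager]
arex_mount_point="https://myhost.org:443/arex"
arguspep_endpoint="https://argus.example.org:8154/authz"
arguspep_profile="emi"
arguspep_usermap="no"
.fi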
.TP .B arguspdp_profile arguspdp_profile - defines which communication profile to use while communicating with the Argus PDP service. Possible values are: subject - pass only the subject name of the client cream - makes A-REX pretend it is a gLite CREAM service. This is the recommended profile for interoperability with gLite. emi - new profile developed in the EMI project. This is the default option. .IR Example: .br arguspdp_profile="cream" .TP .B arguspdp_acceptnotapplicable arguspdp_acceptnotapplicable - specifies whether the "NotApplicable" decision returned by the Argus PDP service is treated as a reason to deny the request. Default is no, which treats "NotApplicable" as a reason to deny the request. .IR Example: .br arguspdp_acceptnotapplicable="no" .TP .B watchdog watchdog - specifies whether an additional watchdog process is spawned to restart the main process if it is stuck or dies. Possible values are 'yes' and 'no'. Default is 'no'. .IR Example: .br watchdog="no" .TP .B groupcfg groupcfg group_name [group_name ...] - specifies the authorization groups for the grid-manager to accept. The main location of this parameter is inside the [gridftpd/jobs] block. The 'groupcfg' located here is only effective if the computing service is configured without the GridFTP interface and hence the [gridftpd/jobs] block is missing. .IR Example: .br groupcfg="users" .TP .B unixmap unixgroup unixvo unixmap [unixname][:unixgroup] rule - more sophisticated mapping to a local account .br unixgroup group rule - more sophisticated mapping to a local account for specific authorization groups. .br unixvo vo rule - more sophisticated mapping to a local account for users belonging to the specified VO. .br The main location for these parameters is the [gridftpd] section. If located here, they are only active if the computing service is configured without the GridFTP interface and hence the [gridftpd/jobs] block is missing. For more detailed information see section [gridftpd] and read the "ARC Computing Element System Administrator Guide" manual. .IR Example: .br unixmap="nobody:nogroup all" .br unixgroup="users simplepool /etc/grid-security/pool/users" .br unixvo="ATLAS unixuser atlas:atlas" .TP .B allowunknown allowunknown yes|no - check the user subject against the grid-mapfile. The main location for this parameter is the [gridftpd] section. If located here, it is only active if the computing service is configured without the GridFTP interface and hence the [gridftpd/jobs] block is missing. For more detailed information see section [gridftpd]. .IR Example: .br allowunknown="no" .TP .B delegationdb delegationdb db_name - specify which DB to use to store delegations. Currently supported db_names are bdb and sqlite. Default is bdb. .IR Example: .br delegationdb="bdb" .TP .B forcedefaultvoms forcedefaultvoms VOMS_FQAN - specify the VOMS FQAN which a user will be assigned if his/her credentials contain no VOMS attributes. To assign different values to different queues, put this command into the [queue] block. .IR Example: .br forcedefaultvoms="/vo/group/subgroup" .SH [data-staging] block The [data-staging] block configures DTR data staging parameters. .TP .B debug debug - Log level for transfer logging in job.id.errors files, between 0 (FATAL) and 5 (DEBUG).
Default is to use the value set by the debug option in the [grid-manager] section. .IR Example: .br debug="4" .TP .B maxdelivery maxdelivery - Maximum number of concurrent file transfers, i.e. active transfers using network bandwidth. This is the total number for the whole system including any remote staging hosts. Default is 10. .IR Example: .br maxdelivery="40" .TP .B maxprocessor maxprocessor - Maximum number of concurrent files in each pre- and post-processing state, e.g. cache check or replica resolution. Default is 10. .IR Example: .br maxprocessor="20" .TP .B maxemergency maxemergency - Maximum number of "emergency" slots which can be assigned to transfer shares when all slots up to the limits configured by the above two options are used by other shares. This ensures shares cannot be blocked by others. Default is 1. .IR Example: .br maxemergency="5" .TP .B maxprepared maxprepared - Maximum number of files in a prepared state, i.e. pinned on a remote storage such as SRM for transfer. A good value is a small multiple of maxdelivery. Default is 200. .IR Example: .br maxprepared="250" .TP .B sharetype sharetype - Scheme to assign transfer shares. Possible values are dn, voms:vo, voms:role and voms:group. .IR Example: .br sharetype="voms:role" .TP .B definedshare definedshare - Defines a share with a fixed priority, different from the default (50). Priority is an integer between 1 (lowest) and 100 (highest). .IR Example: .br definedshare="myvo:production 80" .br definedshare="myvo:student 20" .TP .B dtrlog dtrlog - A file in which data staging state information (for monitoring and recovery purposes) is periodically dumped. Default is controldir/dtrstate.log .IR Example: .br dtrlog="/tmp/dtrstate.log" .TP .B central_logfile central_logfile - A file in which all data staging messages from every job will be logged (in addition to their job.id.errors files). If this option is not present or the path is empty the log file is not created. Note this file is not automatically controlled by logrotate. .IR Example: .br central_logfile="/var/log/arc/datastaging.log" .TP .B deliveryservice The following 4 options are used to configure multi-host data staging. deliveryservice - URL of a data delivery service which can perform remote data staging .IR Example: .br deliveryservice="https://myhost.org:60003/datadeliveryservice" .TP .B localdelivery localdelivery - If any deliveryservice is defined, this option determines whether local data transfer is also performed. Default is no. .IR Example: .br localdelivery="yes" .TP .B remotesizelimit remotesizelimit - Lower limit on the file size (in bytes) of files that remote hosts should transfer. Can be used to increase performance by transferring small files using local processes. .IR Example: .br remotesizelimit="100000" .TP .B usehostcert usehostcert - Whether the A-REX host certificate should be used for communication with remote hosts instead of the users' proxies. Default is no. .IR Example: .br usehostcert="yes" .TP .B acix_endpoint acix_endpoint URL - the ARC Cache Index specified here will be queried for every input file specified in a job description, and any replicas found in sites with accessible caches will be added to the replica list of the input file. The replicas will be tried in the order specified by .B preferredpattern. .IR Example: .br acix_endpoint="https://cacheindex.ndgf.org:6443/data/index" .TP .B securetransfer securetransfer yes|no - whether to use secure or non-secure data transfer when the data connection allows choosing. Currently only works for gridftp. Default is no. .IR Example: .br securetransfer="no"
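.PP
As an illustrative sketch only (the values are placeholders taken from the examples above), a [data-staging] block might combine these options as:
.nf
[data-staging]
maxdelivery="40"
maxprocessor="20"
maxemergency="5"
sharetype="voms:role"
definedshare="myvo:production 80"
.fi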
.TP .B passivetransfer passivetransfer yes|no - If yes, gridftp transfers are passive. Setting this option to yes can solve transfer problems caused by firewalls. Default is no. .IR Example: .br passivetransfer="no" .TP .B httpgetpartial httpgetpartial yes|no - If yes, HTTP GET transfers may transfer data in chunks/parts. If no, data is always transferred in one piece. Default is yes. .IR Example: .br httpgetpartial="yes" .TP .B speedcontrol speedcontrol min_speed min_time min_average_speed max_inactivity - specifies how slow a data transfer must be to trigger an error. The transfer is cancelled if the speed is below min_speed bytes per second for at least min_time seconds, or if the average rate is below min_average_speed bytes per second, or if no data was transferred for longer than max_inactivity seconds. A value of zero turns the feature off. Default is "0 300 0 300" .IR Example: .br speedcontrol="0 300 0 300" .TP .B preferredpattern preferredpattern pattern - specifies a preferred pattern on which to sort multiple replicas of an input file. It consists of one or more patterns separated by a pipe character (|) listed in order of preference. Replicas will be ordered by the earliest match. If the dollar character ($) is used at the end of a pattern, the pattern will be matched to the end of the hostname of the replica. If an exclamation mark (!) is used at the beginning of a pattern, any replicas matching the pattern will be excluded from the sorted replicas. .IR Example: .br preferredpattern="srm://myhost.ac.uk|.uk$|ndgf.org$|!badhost.org$" .TP .B copyurl copyurl url_head local_path - specifies that URLs starting with 'url_head' should be accessed in a different way (most probably unix open). The 'url_head' part of the URL will be replaced with 'local_path' and the file from the obtained path will be copied to the session directory. NOTE: 'local_path' can also be of URL type. You can have several copyurl lines. .IR Example: .br copyurl="gsiftp://example.org:2811/data/ gsiftp://example.org/data/" .br copyurl="gsiftp://example2.org:2811/data/ gsiftp://example2.org/data/" .TP .B linkurl linkurl url_head local_path [node_path] - identical to 'copyurl', only the file won't be copied; a soft-link will be created instead. The 'local_path' specifies the way to access the file from the gatekeeper, and is used to check permissions. The 'node_path' specifies how the file can be accessed from computing nodes, and will be used for soft-link creation. If 'node_path' is missing, 'local_path' will be used. You can have multiple linkurl settings. .IR Example: .br linkurl="gsiftp://somewhere.org/data /data" .br linkurl="gsiftp://example.org:2811/data/ /scratch/data/" .TP .B maxtransfertries maxtransfertries - the maximum number of times download and upload will be attempted per job (retries are only performed if an error is judged to be temporary). .IR Example: .br maxtransfertries="10" .SH [gridftpd] block The .B [gridftpd] block configures the gridftpd server. .TP .B user user user[:group] - Switch to a non-root user/group after startup. WARNING: Make sure that the certificate files are owned by the user/group specified by this option. Default value is root. .IR Example: .br user="grid" .TP .B debug debug debuglevel - Set the debug level of the gridftpd daemon, between 0 (FATAL) and 5 (DEBUG). Default is 3 (INFO). .IR Example: .br debug="2" .TP .B daemon daemon yes|no - Whether the gridftpd server is run in daemon mode. Default is yes.
.IR Example: .br daemon="yes" .TP .B logfile logfile path - Set the log file location .IR Example: .br logfile="/var/log/arc/gridftpd.log" .TP .B logsize logsize size [number] - 'size' specifies in bytes how big the log file is allowed to grow (approximately). If the log file exceeds the specified size it is renamed into logfile.0, logfile.0 is renamed into logfile.1, etc., up to 'number' log files. Don't set logsize if you don't want to enable the ARC log rotation because another log rotation tool is used. .IR Example: .br logsize="100000 2" .TP .B pidfile pidfile path - Specify the location of the file containing the PID of the daemon process. This is useful for automatic start/stop scripts. .IR Example: .br pidfile="/var/run/gridftpd.pid" .TP .B port port bindport - Port to listen on (default 2811) .IR Example: .br port="2811" .TP .B pluginpath pluginpath - directory where the plugin libraries are installed, default is $ARC_LOCATION/lib(64)/arc .IR Example: .br pluginpath="/usr/lib/arc/" .TP .B encryption encryption yes|no - whether data encryption should be allowed; default is no, as encryption is very heavy .IR Example: .br encryption="no" .TP .B include include - Include the contents of another configuration file. .IR Example: .br include="path" .TP .B allowunknown allowunknown yes|no - if no, check the user subject against the grid-mapfile and reject if missing. By default unknown (not in the grid-mapfile) grid users are rejected. .IR Example: .br allowunknown="no" .TP .B allowactivedata allowactivedata yes|no - if no, only passive data transfer is allowed. By default both passive and active data transfers are allowed. .IR Example: .br allowactivedata="yes" .TP .B maxconnections maxconnections - maximum number of connections accepted by the gridftpd server. Default is 100. .IR Example: .br maxconnections="200" .TP .B defaultbuffer defaultbuffer size - defines the size of every buffer for data reading/writing. Default is 65536. The actual value may decrease if the cumulative size of all buffers exceeds the value specified by maxbuffer. .IR Example: .br defaultbuffer="65536" .TP .B maxbuffer maxbuffer size - defines the maximal amount of memory in bytes to be allocated for all data reading/writing buffers. Default is 640kB. The number of buffers is max{3, min{41, 2P + 1}}, where P is the parallelism level requested by the client. Hence, even without parallel streams enabled, the number of buffers will be 3. .IR Example: .br maxbuffer="655360" .TP .B globus_tcp_port_range globus_tcp_port_range, globus_udp_port_range - Firewall configuration .IR Example: .br globus_tcp_port_range="9000,12000" .br globus_udp_port_range="9000,12000" .TP .B firewall firewall - hostname or IP address to use in response to the PASV command instead of the IP address of a network interface of the computer. .IR Example: .br firewall="hostname" .TP .B x509_user_key x509_user_cert, x509_user_key - Server credentials location .IR Example: .br x509_user_key="/etc/grid-security/hostkey.pem" .br x509_user_cert="/etc/grid-security/hostcert.pem" .TP .B x509_cert_dir x509_cert_dir - Location of trusted CA certificates .IR Example: .br x509_cert_dir="/etc/grid-security/certificates" .TP .B gridmap gridmap - The gridmap file location. The default is /etc/grid-security/grid-mapfile .IR Example: .br gridmap="/etc/grid-security/grid-mapfile" .TP .B unixmap unixmap [unixname][:unixgroup] rule - a more sophisticated way to map the Grid identity of a client to a local account. If the client matches 'rule', it is assigned the specified unix identity or one generated by the rule. Mapping commands are processed sequentially and processing stops at the first successful one (like in the [group] section). For possible rules, read the "ARC Computing Element System Administrator Guide" manual. All rules defined in the [group] section can be used. There are also additional rules which produce not only a yes/no result but also give back the user and group names to which the mapping should happen. The way this works is quite complex, so it is better to read the full documentation. For safety reasons, if sophisticated mapping is used it is better to finish the mapping sequence with a default mapping to a nonexistent or safe account. .IR Example: .br unixmap="nobody:nogroup all"
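.PP
As an illustrative sketch only (the values are placeholders based on the examples above), the core [gridftpd] options might be combined as:
.nf
[gridftpd]
debug="2"
logfile="/var/log/arc/gridftpd.log"
pidfile="/var/run/gridftpd.pid"
port="2811"
allowunknown="no"
maxconnections="200"
.fi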
Mapping commands are processed sequentially and processing stops at the first successful one (like in the [group] section). For possible rules read the "ARC Computing Element System Administrator Guide" manual. All rules defined in the [group] section can be used. There are also additional rules which produce not only a yes/no result but also give back the user and group names to which mapping should happen. The way it works is quite complex, so it is better to read the full documentation. For safety reasons, if sophisticated mapping is used it is better to finish the mapping sequence with a default mapping to a nonexistent or safe account. .IR Example: .br unixmap="nobody:nogroup all" .TP .B unixgroup unixgroup group rule - do the mapping only for users belonging to the specified authorization 'group'. It is similar to an additional filter for the unixmap command which filters out all users not belonging to the specified authorization group. Only rules which generate unix user and group names may be used in this command. Please read the "ARC Computing Element System Administrator Guide" for more information. .IR Example: .br unixgroup="users simplepool /etc/grid-security/pool/users" .TP .B unixvo unixvo vo rule - do the mapping only for users belonging to the specified VO. Only rules which generate a unix identity name may be used in this command. Please read the "ARC Computing Element System Administrator Guide" for more information. This command is similar to 'unixgroup' described above and exists for the convenience of setups which base mapping on the VOs users belong to. .IR Example: .br unixvo="ATLAS unixuser atlas:atlas" .SH [gridftpd/filedir] block [gridftpd/filedir] "fileplugin" storage subblock for "exporting" a directory using the gridftpd's fileplugin plugin. gridftp plugins are shared libraries. "filedir" is a unique label. Access control is set using the "dir" configuration option .TP .B plugin plugin name - specifies the name of the shared library to be loaded, relative to "pluginpath". The following setting is mandatory for a gridftp file server with "fileplugin"; don't change it .IR Example: .br plugin="fileplugin.so" .TP .B groupcfg groupcfg group_name [group_name ...] - specifies the authorization groups for which this plugin is activated. In case groupcfg is not used the plugin is loaded for every mapped grid user. Multiple names may be specified, delimited by blank space. Group names are as specified in [group] sections. .IR Example: .br groupcfg="users" .TP .B path the name of the virtual directory served by the gridftp server, REQUIRED. The exported storage area is accessible as gsiftp://my_server/topdir. "topdir" is just an example, call the virtual path anything you like, even "/" is a valid choice. .IR Example: .br path="/topdir" .TP .B mount the physical directory corresponding to the virtual one: gsiftp://my_server/topdir will give access to the /scratch/grid directory on my_server, REQUIRED .IR Example: .br mount="/scratch/grid" .TP .B dir dir - this is the access control parameter; you can have several "dir" lines controlling different directories within the same block. dir path options - specifies access rules for accessing files in 'path' (relative to the virtual and real path) and all the files and directories below. 'options' are: nouser - do not use local file system rights, only use those specified in this line; owner - check only file owner access rights; group - check only group access rights; other - check only "others" access rights. If none of the above is specified, usual unix access rights are applied.
read - allow reading files; delete - allow deleting files; append - allow appending to files (does not allow creation); overwrite - allow overwriting already existing files (does not allow creation, file attributes are not changed); dirlist - allow obtaining a list of the files; cd - allow making this directory the current one; create owner:group permissions_or:permissions_and - allow creating new files. The file will be owned by 'owner' and the owning group will be 'group'. If '*' is used, the user/group to which the connected user is mapped will be used. The permissions will be set to permissions_or & permissions_and (the second number is reserved for future usage). mkdir owner:group permissions_or:permissions_and - allow creating new directories. .IR Example: .br Set permissions on the mounted directory: .br dir="/ nouser read cd dirlist delete create *:* 664:664 mkdir *:* 775:775" .IR Example: .br Adjust permissions on some subdirectories: .br dir="/section1 nouser read mkdir *:* 700:700 cd dirlist" .br dir="/section2 nouser read mkdir *:* 700:700 cd dirlist" .SH [gridftpd/jobs] subblock [gridftpd/jobs] subblock which creates the job submission interface, using the jobplugin of the gridftpd service. gridftp plugins are shared libraries. 'jobs' is a unique label. .TP .B path the path to the virtual gridftpd directory which is used during job submission. MUST be set. .IR Example: .br path="/jobs" .TP .B plugin plugin name - specifies the name of the shared library to be loaded, relative to "pluginpath". The following setting is mandatory for a job submission service via the gridftpd "jobplugin"; don't change it! .IR Example: .br plugin="jobplugin.so" .TP .B groupcfg groupcfg group_name [group_name ...] - specifies the authorization groups for which this plugin is activated. In case groupcfg is not used the plugin is loaded for every mapped grid user. .IR Example: .br groupcfg="users" .TP .B allownew The 'allownew' configuration parameter sets whether the grid resource accepts submission of new jobs. This parameter can be used to close down a grid. The default is yes. .IR Example: .br allownew="yes" .TP .B remotegmdirs remotegmdirs controldir sessiondir - Specifies control and session directories to which jobs can be submitted but which are under the control of another A-REX. The corresponding controldir and sessiondir parameters must be defined in the other A-REX's configuration. Multiple remotegmdirs can be specified. .IR Example: .br remotegmdirs="/mnt/host1/control /mnt/host1/session" .TP .B maxjobdesc maxjobdesc size - specifies the maximal allowed size of a job description in bytes. Default value is 5MB. If the value is missing or 0, the size is not limited. .IR Example: .br maxjobdesc="5242880" .TP .B configfile configfile service_configuration_path - If the [gridftpd] and [grid-manager] configuration parts are located in separate files, this configuration option allows linking them. The service_configuration_path points to the configuration file containing the [grid-manager] section. Use this option only if you really know what you are doing. .IR Example: .br configfile="/etc/arc.conf" .SH [infosys] block The [infosys] block configures the hosting environment of the Information services (Local Info Tree, Index Service, Registrations, see the Information System manual) provided by the OpenLDAP slapd server. .TP .B infosys_compat infosys_compat - Setting this variable will cause ARC to use the old infoproviders. Basically, the new version uses A-REX to create LDIF while the old version uses a BDII provider-script to do it. The new version is required for GLUE2 output.
.IR Example: .br infosys_compat="disable" .TP .B infoproviders_timeout infoproviders_timeout - this only applies to the new infoproviders. It changes A-REX behaviour with respect to a single infoprovider run. Increase this value if you have many jobs in the controldir and the infoproviders need more time to process. The value is in seconds. Default is 600 seconds. .IR Example: .br infoproviders_timeout = "600" .TP .B debug debug - sets the debug level/verbosity of the startup script {0 or 1}. Default is 0. .IR Example: .br debug="1" .TP .B hostname hostname - the hostname of the machine running the slapd service; it will be used as the bind hostname for slapd. If not present, it will be taken from the [common] block or guessed .IR Example: .br hostname="my.testbox" .TP .B port port - the port where the slapd service runs. The default infosys port is 2135. .IR Example: .br port="2135" .TP .B slapd_loglevel slapd_loglevel - sets the native slapd loglevel (see man slapd). Slapd logs via syslog. The default is set to no-logging (0) and it is RECOMMENDED not to change it in a production environment. A non-zero slapd_loglevel value causes a serious performance decrease. .IR Example: .br slapd_loglevel="0" .TP .B slapd_hostnamebind slapd_hostnamebind - may be used to set the hostname part of the network interface to which the slapd process will bind. In most cases there is no need to set it, since the hostname configuration parameter is already sufficient. The default is empty. The example below will bind the slapd process to all the network interfaces available on the server. .IR Example: .br slapd_hostnamebind="*" .TP .B threads threads - the native slapd threads parameter, default is 32. If you run an Index service too you should modify this value. .IR Example: .br threads="128" .TP .B timelimit timelimit - the native slapd timelimit parameter. Maximum number of seconds the slapd server will spend answering a search request. Default is 3600. You probably want a much lower value. .IR Example: .br timelimit="1800" .TP .B idletimeout idletimeout - the native slapd idletimeout parameter. Maximum number of seconds the slapd server will wait before forcibly closing idle client connections. Its value must be larger than the value of the "timelimit" option. If not set, it defaults to timelimit + 1. .IR Example: .br idletimeout="1800" .TP .B ldap_schema_dir ldap_schema_dir - allows explicitly specifying a path to the schema files. Note that this doesn't override the standard locations, but adds the specified path to the standard locations /etc/ldap and /etc/openldap. If you plan to relocate the Glue1 and GLUE2 schemas, all of these should be in the same directory that you specify here. This option does NOT apply to the nordugrid.schema file, which has a release-dependent location. Default is to use only the standard locations described above. .IR Example: .br ldap_schema_dir="/nfs/ldap/schema/" .TP .B oldconfsuffix oldconfsuffix .suffix - sets the suffix of the backup files of the low-level slapd configuration files in case they are regenerated. Default is ".oldconfig". .IR Example: .br oldconfsuffix=".oldconfig" .TP .B overwrite_config overwrite_config yes|no - determines whether the infosys startup scripts should generate new low-level slapd configuration files. By default the low-level configuration files are regenerated with every server startup, making use of the values specified in arc.conf. .IR Example: .br overwrite_config="yes" .TP .B registrationlog registrationlog path - specifies the logfile for the registration processes initiated by your machine.
Default is "/var/log/arc/inforegistration.log" .IR Example: .br registrationlog="/var/log/arc/inforegistration.log" .TP .B providerlog providerlog path - Specifies log file location for the information provider scripts. The feature is only available with >= 0.5.26 tag. Default is "/var/log/arc/infoprovider.log" .IR Example: .br providerlog="/var/log/arc/infoprovider.log" .TP .B provider_loglevel provider_loglevel - loglevel for the infoprovider scripts (0-5). The default is 1 (critical errors are logged) .IR Example: .br provider_loglevel="2" .TP .B user user unix_user - the unix user running the infosys processes such as the slapd, the registrations and infoprovider scripts. By default the ldap-user is used, you can run it as root if you wish. In case of non-root value you must make sure that the A-REX directories and their content are readable by the 'user' and the 'user' has access to the full LRMS information including jobs submitted by other users. The A-REX directories (controldir, sessiondir runtimedir, cachedir) are specified in the [grid-manager] block .IR Example: .br user="root" .TP .B giis_location giis_location - If giis_location is not set, ARC_LOCATION will be used instead. .IR Example: .br giis_location="/usr/" .TP .B infosys_nordugrid These three variables decide which schema should be used for publishing data. They can all be enabled at the same time. Default is to enable nordugrid mds and disable glue. infosys_nordugrid - Enables NorduGrid schema .IR Example: .br infosys_nordugrid="enable" .TP .B infosys_glue12 infosys_glue12 - Enables glue1.2/1.3 schema If infosys_glue12 is enabled, then resource_location, resource_latitude and resource_longitude need to be set in the [infosys/glue12] block. These variables do not have default values. The rest of the variables defaults are showcased below. .IR Example: .br infosys_glue12="disable" .TP .B infosys_glue2_ldap infosys_glue2 - Enables GLUE2 schema .IR Example: .br infosys_glue2_ldap="disable" .TP .B infosys_glue2_ldap_showactivities infosys_glue2_ldap_showactivities - Enables GLUE2 ComputingActivities to appear in the LDAP rendering they're currently disabled by default. .IR Example: .br infosys_glue2_ldap_showactivities="disable" .TP .B infosys_glue2_service_qualitylevel infosys_glue2_service_qualitylevel - Allows a sysadmin to define a different GLUE2 QualityLevel for A-REX. This can be used for operations. default: production Allowed value is one of: "production", "pre-production", "testing", "development" Refer to GLUE2 documentation for the meaning of these strings. .IR Example: .br infosys_glue2_service_qualitylevel="production" .TP .B slapd slapd - Configure where the slapd command is located, default is: /usr/sbin/slapd .IR Example: .br slapd="/usr/sbin/slapd" .TP .B slapadd slapadd - Configure where the slapadd command is located, default is: /usr/sbin/slapadd .IR Example: .br slapadd="/usr/sbin/slapadd" .SH BDII specific Starting from 11.05, Nordugrid ARC only supports BDII5. These variables are usually automatically set by ARC, and are here mostly for debug purposes and to tweak exotic BDII5 installations. In general, a sysadmin should not set these. .TP .B bdii_debug_level bdii_debug_level - set the following to DEBUG to check bdii errors in bdii-update.log useful not to enable slapd logs reducing performance issues. .IR Example: .br bdii_debug_level="ERROR" .TP .B provider_timeout provider_timeout - This variable allows a system administrator to modify the behaviour of bdii-update. 
This is the time BDII waits for the scripts generated by the A-REX infoproviders to produce their output. Default is 300 seconds. .IR Example: .br provider_timeout=300 .TP .B infosys_debug infosys_debug - This variable disables/enables an LDAP database containing information about the LDAP database itself, published under "o=infosys"; it is very useful for debugging. Default is enabled. .IR Example: .br infosys_debug="disable" .P BDII5 uses the following variables. These might change depending on the BDII version. ARC sets them by inspecting the distributed BDII configuration files. .B DO NOT CHANGE UNLESS YOU KNOW WHAT YOU'RE DOING .TP .B bdii_location bdii_location - The installation directory for the BDII. Default is /usr .IR Example: .br bdii_location="/usr" .TP .B bdii_var_dir bdii_var_dir - Contains BDII pid files and slapd pid files .IR Example: .br bdii_var_dir="/var/run/arc/bdii" .TP .B bdii_log_dir bdii_log_dir - Contains infosys logs .IR Example: .br bdii_log_dir="/var/log/arc/bdii" .TP .B bdii_tmp_dir bdii_tmp_dir - Contains provider scripts .IR Example: .br bdii_tmp_dir="/var/tmp/arc/bdii" .TP .B bdii_lib_dir bdii_lib_dir - Contains slapd databases .IR Example: .br bdii_lib_dir="/var/lib/arc/bdii" .TP .B bdii_update_pid_file bdii_update_pid_file, slapd_pid_file - Allows changing the filename and location of the bdii-update and slapd pidfiles .IR Example: .br bdii_update_pid_file="/var/run/arc/bdii-update.pid" .br slapd_pid_file="$bdii_var_dir/db/slapd.pid" .TP .B bdii_database bdii_database - Configure which LDAP database backend should be used, default is: bdb .IR Example: .br bdii_database="bdb" .P The following options are for tweaking only. Usually one should not configure them. They change the BDII configuration file generated by ARC. Please consult the BDII manual for details. .TP .B bdii_conf bdii_conf - Location of the BDII configuration file. ARC modifies the original and by default places it at /var/run/arc/infosys/bdii.conf .IR Example: .br bdii_conf="/var/run/arc/infosys/bdii.conf" .P Command line options used to run bdii-update. ARC finds them by looking into the BDII configuration. Default: ${bdii_location}/sbin/bdii-update .B bdii_update_cmd .br .B bdii_archive_size .br .B bdii_db_config .br .B bdii_breathe_time .br .B bdii_delete_delay .br .B bdii_read_timeout .br .B bdii_run_dir .br .B bindmethod .br .B cachettl .br .B db_archive .br .B db_checkpoint .SH EGIIS-related commands .TP .B giis_fifo giis_fifo - path to the fifo used by EGIIS. Default is /var/run/arc/giis-fifo. This file is automatically created by ARC; the option is only for tweaking. .IR Example: .br giis_fifo=/var/run/arc/giis-fifo .P LDAP parameters of the cluster.pl (old) infoprovider; use the defaults, do NOT change them unless you know what you are doing .TP .B cachetime cachetime - affects the old infoproviders, and forces the validity time of the record. .IR Example: .br cachetime="30" .TP .B sizelimit sizelimit - affects registration to EGIIS .IR Example: .br sizelimit="10" .TP .B slapd_cron_checkpoint slapd_cron_checkpoint - LDAP checkpoint enable/disable. This option was introduced to solve bug #2032, to reduce the number of log files produced by BDII. It is usually not needed, but if BDII produces large logs and a huge number of files, it should help solve the related issues. .IR Example: .br slapd_cron_checkpoint="enable" .SH [infosys/glue12] block This block holds information that is needed by the Glue 1.2 schema generation. This is only necessary if infosys_glue12 is enabled.
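.P
A minimal sketch of such a block is shown below. The values are purely illustrative, taken from the sample values quoted in the option descriptions that follow; they are not defaults and must be replaced with your own site's data.
.nf
.B [infosys/glue12]
resource_location="Kastrup, Denmark"
resource_latitude="55.75000"
resource_longitude="12.41670"
cpu_scaling_reference_si00="2400"
processor_other_description="Cores=3,Benchmark=9.8-HEP-SPEC06"
glue_site_web="http://www.ndgf.org"
glue_site_unique_id="NDGF-T1"
provide_glue_site_info="true"
.fi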
.TP .B resource_location These variables need to be set if infosys_glue12 is enabled. IMPORTANT: no slashes or backslashes here! Example: "Kastrup, Denmark" .IR Example: .br resource_location="" .TP .B resource_latitude Example: "55.75000" .IR Example: .br resource_latitude="" .TP .B resource_longitude Example: "12.41670" .IR Example: .br resource_longitude="" .TP .B cpu_scaling_reference_si00 Example: 2400 .IR Example: .br cpu_scaling_reference_si00="" .TP .B processor_other_description Example: Cores=3,Benchmark=9.8-HEP-SPEC06 .IR Example: .br processor_other_description="" .TP .B glue_site_web Example: http://www.ndgf.org .IR Example: .br glue_site_web="" .TP .B glue_site_unique_id Example: NDGF-T1 .IR Example: .br glue_site_unique_id="" .TP .B provide_glue_site_info This variable decides whether the GlueSite should be published. In case you want a more complicated setup with several publishers of data to a GlueSite, you may wish to tweak this parameter. .IR Example: .br provide_glue_site_info="true" .SH [infosys/site/sitename] block [infosys/site/sitename] Site BDII configuration block. This block is used to configure ARC to generate a site-bdii that can be registered in GOCDB etc. to make it a part of a gLite network. The sitename part should be descriptive of the site-bdii being generated. .TP .B unique_id The unique id used to identify this site, e.g. "NDGF-T1" .IR Example: .br unique_id="" .TP .B url The URL is of the format: ldap://host.domain:2170/mds-vo-name=something,o=grid and should point to the resource-bdii .IR Example: .br url="" .SH [infosys/admindomain] block [infosys/admindomain] GLUE2 AdminDomain configuration block, to configure administrative items of the cluster. These values affect neither the glue12 nor the nordugrid renderings. If the whole block is not specified, it will default to an AdminDomain called UNDEFINEDVALUE. .TP .B name name - the Name attribute for the domain. This will show up in the top-BDII to group the resources belonging to this cluster. To group a bunch of clusters under the same AdminDomain, just use the same name. If not specified, it will default to UNDEFINEDVALUE. .IR Example: .br name="ARC-TESTDOMAIN" .TP .B description description - description of this domain. Not mandatory. .IR Example: .br description="ARC test Domain" .TP .B www www - URL pointing at a site holding information about the AdminDomain. Not mandatory. .IR Example: .br www="http://www.nordugrid.org/" .TP .B distributed distributed - set this to yes if the domain is distributed, that is, if the resources belonging to the domain are considered geographically distributed. .IR Example: .br distributed=yes .TP .B owner owner - contact email of a responsible subject for the domain .IR Example: .br owner=admin@nordugrid.org .TP .B otherinfo otherinfo - fills the OtherInfo GLUE2 field. No need to set; used only for future development. .IR Example: .br otherinfo=Test Other info .SH [infosys/index/indexname] block The [infosys/index/indexname] Index Service block configures and enables an Information Index Service. A separate Index block is required for every Index Service you may run on the given machine. The 'indexname' forms part of the 'mds-vo-name=indexname,o=grid' LDAP suffix characterizing the Index Service. .TP .B name name - The unique (within the hosting machine) name of the Index Service.
Its value becomes part of the LDAP suffix of the Index Service: (mds-vo-name=value of the name attribute, o=grid) .IR Example: .br name="indexname" .TP .B allowreg allowreg - Implements registration filtering within an Index Service. Sets the Local Information Trees or lower level Index Services allowed to register to the Index Service. List each allowed registrant with an allowreg attribute. WARNING: specifying allowreg implies setting up strict filtering; only the matching registrants will be able to register to the Index. The wildcard * can be used in allowreg. Several allowreg lines can be used. Some examples: -All the Swedish machines can register, regardless of whether they are resources or Indices allowreg="*.se:2135" -Cluster resources from Denmark can register allowreg="*.dk:2135/nordugrid-cluster-name=*, Mds-Vo-name=local, o=grid" -Storage resources from HIP, Finland can register allowreg="*hip.fi:2135/nordugrid-se-name=*, Mds-Vo-name=local, o=grid" -The index1.sweden.se can register as a Sweden Index (and only as a Sweden Index) allowreg="index1.sweden.se:2135/Mds-vo-Name=Sweden,o=Grid" -Any Index Service can register allowreg="*:2135/Mds-vo-Name=*,o=Grid" .IR Example: .br allowreg="trusted.host.org.se:2135/Mds-vo-Name=Trusted-Index,o=Grid" .SH [infosys/index/indexname/registration/registrationname] block The [infosys/index/indexname/registration/registrationname] Index Service registration block enables a registration process initiated by the 'indexname' Index Service (configured previously) to a target Index Service. NorduGrid maintains a webpage with information on major Index Services: http://www.nordugrid.org/NorduGridMDS/index_service.html .TP .B targethostname targethostname - the hostname of the machine running the registration target Index Service .IR Example: .br targethostname="index.myinstitute.org" .TP .B targetport targetport - the port on which the target Index Service is running. The default is the 2135 Infosys port. .IR Example: .br targetport="2135" .TP .B targetsuffix targetsuffix - the LDAP suffix of the target Index Service .IR Example: .br targetsuffix="mds-vo-name=BigIndex,o=grid" .TP .B regperiod regperiod - The registration period in seconds; the registration messages are continuously sent according to the regperiod. Default is 120 sec. .IR Example: .br regperiod="300" .TP .B registranthostname registranthostname - the hostname of the machine sending the registrations. This attribute inherits its value from the [common] and [infosys] blocks; in most cases there is no need to set it. .IR Example: .br registranthostname="myhost.org" .TP .B registrantport registrantport - the port of the slapd service hosting the registrant Index Service. The attribute inherits its value from the [infosys] block (and therefore defaults to 2135) .IR Example: .br registrantport="2135" .TP .B registrantsuffix registrantsuffix - the LDAP suffix of the registrant Index Service. It is automatically determined from the registration block name, therefore in most cases there is no need to specify it. In this case the default registrantsuffix will be: "Mds-Vo-name=indexname" Please mind uppercase/lowercase characters in the above string when defining allowreg in an index! Don't set it unless you want to overwrite the default. .IR Example: .br registrantsuffix="mds-vo-name=indexname,o=grid" .br .SH [cluster] block This block configures how your cluster is seen on the grid monitor (from the infosys point of view). Please consult the Infosys manual for detailed information on cluster attributes.
If you want your cluster (configured below) to appear in the infosys (on the monitor) you also need to create a cluster registration block (see the next block). .TP .B hostname hostname - the FQDN of the frontend node; if the hostname is not set already in the common block then it MUST be set here .IR Example: .br hostname="myhost.org" .TP .B interactive_contactstring interactive_contactstring - the contact string for interactive logins; set this if the cluster supports some sort of grid-enabled interactive login (gsi-ssh), multivalued .IR Example: .br interactive_contactstring="gsissh://frontend.cluster:2200" .TP .B cluster_alias cluster_alias - an arbitrary alias name of the cluster, optional .IR Example: .br cluster_alias="Big Blue Cluster in Nowhere" .TP .B comment comment - a free text field for additional comments on the cluster in a single line, no newline character is allowed! .IR Example: .br comment="This cluster is specially designed for XYZ applications: www.xyz.org" .TP .B cluster_location cluster_location - The geographical location of the cluster, preferably specified as a postal code with a two letter country prefix .IR Example: .br cluster_location="DK-2100" .TP .B cluster_owner cluster_owner - it can be used to indicate the owner of a resource, multiple entries can be used .IR Example: .br cluster_owner="World Grid Project" .br cluster_owner="University of NeverLand" .TP .B authorizedvo authorizedvo - this attribute is used to advertise which VOs are authorized on the cluster. Multiple entries are allowed. These entries will be shown in the GLUE2 AccessPolicy and MappingPolicy objects. .IR Example: .br authorizedvo="developer.nordugrid.org" .br authorizedvo="community.nordugrid.org" .TP .B clustersupport clustersupport - this is the support email address of the resource, multiple entries can be used .IR Example: .br clustersupport="grid.support@mysite.org" .br clustersupport="grid.support@myproject.org" .TP .B lrmsconfig lrmsconfig - an optional free text field to describe the configuration of your Local Resource Management System (batch system). .IR Example: .br lrmsconfig="single job per processor" .TP .B homogeneity homogeneity - determines whether the cluster consists of identical NODES with respect to cputype, memory, installed software (opsys). The frontend does NOT need to be homogeneous with the nodes. In case of inhomogeneous nodes, try to arrange the nodes into homogeneous groups assigned to a queue and use queue-level attributes. Possible values: True, False; the default is True. False will trigger multiple GLUE2 ExecutionEnvironments to be published if applicable. .IR Example: .br homogeneity="True" .TP .B architecture architecture - sets the hardware architecture of the NODES. The "architecture" is defined as the output of "uname -m" (e.g. i686). Use this cluster attribute only if the NODES are homogeneous with respect to the architecture. Otherwise the queue-level attribute may be used for inhomogeneous nodes. If the frontend's architecture agrees with that of the nodes, the "adotf" (Automatically Determine On The Frontend) value can be used to request automatic determination. .IR Example: .br architecture="adotf" .TP .B opsys opsys - this multivalued attribute is meant to describe the operating system of the computing NODES. Set it to the opsys distribution of the NODES and not the frontend! opsys can also be used to describe the kernel or libc version in case those differ from the originally shipped ones.
The distribution name should be given as distroname-version.number, where spaces are not allowed. The kernel version should come in the form kernelname-version.number. If the NODES are inhomogeneous with respect to this attribute do NOT set it on cluster level; instead arrange your nodes into homogeneous groups assigned to a queue and use queue-level attributes. .IR Example: .br opsys="Linux-2.6.18" .br opsys="glibc-2.5.58" .br opsys="CentOS-5.6" .TP .B nodecpu nodecpu - this is the cputype of the homogeneous nodes. The string is constructed from /proc/cpuinfo as the value of "model name", followed by "@" and the value of "cpu MHz". Do NOT set this attribute on cluster level if the NODES are inhomogeneous with respect to cputype; instead arrange the nodes into homogeneous groups assigned to a queue and use queue-level attributes. Setting nodecpu="adotf" will result in Automatic Determination On The Frontend, which should only be used if the frontend has the same cputype as the homogeneous nodes. .IR Example: .br nodecpu="AMD Duron(tm) Processor @ 700 MHz" .TP .B nodememory nodememory - this is the amount of memory (specified in MB) on the node which can be guaranteed to be available for the application. Please note that in most cases it is less than the physical memory installed in the nodes. Do NOT set this attribute on cluster level if the NODES are inhomogeneous with respect to their memories; instead arrange the nodes into homogeneous groups assigned to a queue and use queue-level attributes. .IR Example: .br nodememory="512" .TP .B defaultmemory defaultmemory - If a user submits a job without specifying how much memory should be used, this value will be taken first. The order is: xrsl -> defaultmemory -> nodememory -> 1GB. This is the amount of memory (specified in MB) that a job will request (per rank). .IR Example: .br defaultmemory="512" .TP .B benchmark benchmark name value - this optional multivalued attribute can be used to specify benchmark results on the cluster level. Use this cluster attribute only if the NODES are homogeneous with respect to the benchmark performance. Otherwise the similar queue-level attribute should be used. Please try to use one of the standard benchmark names given below if possible. .IR Example: .br benchmark="SPECINT2000 222" .br benchmark="SPECFP2000 333" .TP .B middleware middleware - this multivalued attribute shows the installed grid software on the cluster; nordugrid and globus-ng are set automatically, no need to specify middleware=nordugrid or middleware=globus .IR Example: .br middleware="my grid software" .TP .B nodeaccess nodeaccess - determines how the nodes can connect to the internet. Not setting anything means the nodes are sitting on a private isolated network. "outbound" access means the nodes can connect to the outside world, while "inbound" access means the nodes can be connected to from outside. Inbound & outbound access together means the nodes are sitting on a fully open network. .IR Example: .br nodeaccess="inbound" .br nodeaccess="outbound" .TP .B dedicated_node_string dedicated_node_string - the string which is used in the PBS node configuration to distinguish the grid nodes from the rest. Suppose only a subset of nodes is available for grid jobs, and these nodes have a common "node property" string; in this case the dedicated_node_string should be set to this value and only the nodes with the corresponding "pbs node property" are counted as grid-enabled nodes.
Setting the dedicated_node_string to the value of the "pbs node property" of the grid-enabled nodes will influence how the totalcpus and user freecpus are calculated. You don't need to set this attribute if your cluster is fully available for the grid and your cluster's PBS configuration does not use the "node property" method to assign certain nodes to grid queues. You shouldn't use this configuration option unless you make sure your PBS configuration makes use of the setup described above. .IR Example: .br dedicated_node_string="gridnode" .TP .B localse localse - this multivalued parameter tells the BROKER that certain URLs (and locations below them) should be considered "locally" available to the cluster. .IR Example: .br localse="gsiftp://my.storage/data1/" .br localse="gsiftp://my.storage/data2/" .TP .B gm_mount_point gm_mount_point - this is the same as the "path" from the [gridftpd/jobs] block. The default is "/jobs". Will be cleaned up later, do NOT touch it. .IR Example: .br gm_mount_point="/jobs" .TP .B gm_port gm_port - this is the same as the "port" from the [gridftpd] block. The default is "2811". Will be cleaned up later. .IR Example: .br gm_port="2811" .TP .B cpudistribution cpudistribution - this is the CPU distribution over nodes given in the form ncpu:m, where n is the number of CPUs per machine and m is the number of such machines. Example: 1cpu:3,2cpu:4,4cpu:1 represents a cluster with 3 single-CPU machines, 4 dual-CPU machines and one machine with 4 CPUs. This command is needed to tweak and overwrite the values returned by the underlying LRMS. In general there is no need to configure it. .IR Example: .br cpudistribution=1cpu:3,2cpu:4,4cpu:1 .SH [infosys/cluster/registration/registrationname] block The computing resource (cluster) registration block configures and enables the registration process of a computing resource to an Index Service. A cluster can register to several Index Services; in this case each registration process should have its own block. NorduGrid maintains a webpage with information on major Index Services: http://www.nordugrid.org/NorduGridMDS/index_service.html .TP .B targethostname targethostname - see description earlier .IR Example: .br targethostname="index.myinstitute.org" .TP .B targetport targetport - see description earlier .IR Example: .br targetport="2135" .TP .B targetsuffix targetsuffix - see description earlier .IR Example: .br targetsuffix="mds-vo-name=BigIndex,o=grid" .TP .B regperiod regperiod - see description earlier .IR Example: .br regperiod="300" .TP .B registranthostname registranthostname - see description earlier .IR Example: .br registranthostname="myhost.org" .TP .B registrantport registrantport - see description earlier .IR Example: .br registrantport="2135" .TP .B registrantsuffix registrantsuffix - the LDAP suffix of the registrant cluster resource. It is automatically determined from the [infosys] block and the registration block name. In this case the default registrantsuffix will be: "nordugrid-cluster-name=hostname,Mds-Vo-name=local,o=Grid" Please mind uppercase/lowercase characters above if defining allowreg in an index! Don't set it unless you want to overwrite the default. .IR Example: .br registrantsuffix="nordugrid-cluster-name=myhost.org,Mds-Vo-name=local,o=grid" .SH [queue/queue_name] block Each grid-enabled queue should have a separate queue block. The queue_name should be used as a label in the block name.
A queue can represent a PBS/LSF/SGE/SLURM/LL queue, an SGE pool, a Condor pool or a single machine in case the 'fork' type of LRMS is specified in the [common] block. Queues don't need to be registered (there is no queue registration block); once you have configured your cluster to register to an Index Service, the queue entries (configured with this block) will automatically be there. Please consult the ARC Information System manual for detailed information on queue attributes: http://www.nordugrid.org/documents/arc_infosys.pdf Use the queue_name for labeling the block. The special name 'fork' should be used for labeling the queue block in case you specified the 'fork' type of LRMS in the [common] block. .TP .B name name - sets the name of the grid-enabled queue. It MUST match the queue_name label of the corresponding queue block, see above. Use "fork" if you specified the 'fork' type of LRMS in the [common] block. The queue name MUST be specified, even if the queue block is already correctly labeled. .IR Example: .br name="gridlong" .TP .B homogeneity homogeneity - determines whether the queue consists of identical NODES with respect to cputype, memory, installed software (opsys). In case of inhomogeneous nodes, try to arrange the nodes into homogeneous groups and assign them to a queue. Possible values: True, False; the default is True. .IR Example: .br homogeneity="True" .TP .B scheduling_policy scheduling_policy - this optional parameter tells the scheduling policy of the queue; PBS by default offers the FIFO scheduler, while many sites run MAUI. At the moment FIFO & MAUI are supported. If you have a MAUI scheduler you should specify the "MAUI" value, since it modifies the way the queue resources are calculated. By default the "FIFO" scheduler is assumed. .IR Example: .br scheduling_policy="FIFO" .TP .B comment comment - a free text field for additional comments on the queue in a single line, no newline character is allowed! .IR Example: .br comment="This queue is nothing more than a condor pool" .TP .B maui_bin_path maui_bin_path - set this parameter to the path of the MAUI commands (like showbf) in case you specified the "MAUI" scheduling_policy above. This parameter can be set in the [common] block as well. .IR Example: .br maui_bin_path="/usr/local/bin" .TP .B queue_node_string queue_node_string - In PBS you can assign nodes to a queue (or a queue to nodes) by using the "node property" PBS node configuration method and assigning the marked nodes to the queue (setting resources_default.neednodes = queue_node_string for that queue). This parameter should contain the "node property" string of the queue-assigned nodes. Setting the queue_node_string changes how the queue-totalcpus and user freecpus are determined for this queue. Essentially, the queue_node_string value is used to construct the nodes= string in the PBS script, such as nodes=count:queue_node_string where count is taken from the job description (1 if not specified). You shouldn't use this option unless you are sure that your PBS configuration makes use of the above configuration. Read the NorduGrid PBS instructions for more information: http://www.nordugrid.org/documents/pbs-config.html .IR Example: .br queue_node_string="gridlong_nodes" .br queue_node_string="ppn=4:ib" .TP .B sge_jobopts sge_jobopts - additional SGE options to be used when submitting jobs to SGE from this queue. If in doubt, leave it commented out. .IR Example: .br sge_jobopts="-P atlas -r yes" .TP .B condor_requirements condor_requirements - only needed if using Condor.
It needs to be defined for each queue. Use this option to determine which nodes belong to the current queue. The value of 'condor_requirements' must be a valid constraint string which is recognized by a condor_status -constraint '....' command. It can reference pre-defined ClassAd attributes (like Memory, Opsys, Arch, HasJava, etc) but also custom ClassAd attributes. To define a custom attribute on a condor node, just add two lines like the ones below in the `hostname`.local config file on the node: NORDUGRID_RESOURCE=TRUE STARTD_EXPRS = NORDUGRID_RESOURCE, $(STARTD_EXPRS) A job submitted to this queue is allowed to run on any node which satisfies the 'condor_requirements' constraint. If 'condor_requirements' is not set, jobs will be allowed to run on any of the nodes in the pool. When configuring multiple queues, you can differentiate them based on memory size or disk space, for example: .IR Example: .br condor_requirements="(OpSys == "linux" && NORDUGRID_RESOURCE && Memory >= 1000 && Memory < 2000)" .TP .B lsf_architecture CPU architecture to request when submitting jobs to LSF. Use only if you know what you are doing. .IR Example: .br lsf_architecture="PowerPC" .TP .B totalcpus totalcpus - manually sets the number of cpus assigned to the queue. No need to specify the parameter in case the queue_node_string method was used to assign nodes to the queue (in this case it is dynamically calculated and the static value is overwritten) or when the queue has access to the entire cluster (in this case the cluster-level totalcpus is the relevant parameter). Use this static parameter only if some special method is applied to assign a subset of totalcpus to the queue. .IR Example: .br totalcpus="32" .TP .B nodecpu The queue-level configuration parameters nodecpu, nodememory, architecture, opsys and benchmark should be set if they are homogeneous over the nodes assigned to the queue AND they are different from the cluster-level value. Their meanings are described in the cluster block. Usage: this queue collects nodes with "nodememory=512" while another queue has nodes with "nodememory=256" -> don't set the cluster attributes but use the queue-level attributes. When the frontend's architecture or cputype agrees with the queue nodes, the "adotf" (Automatically Determine On The Frontend) value can be used to request automatic determination of architecture or nodecpu. .IR Example: .br nodecpu="adotf" .br nodememory="512" .br architecture="adotf" .br opsys="Fedora 16" .br opsys="Linux-3.0" .br benchmark="SPECINT2000 222" .br benchmark="SPECFP2000 333" .TP .B ac_policy Queue access policy rules based on VOMS attributes in the user's proxy certificate (requires the arc-vomsac-check plugin to be enabled). Matching rules have the following format: ac_policy="[+/-]VOMS: " Please read the arc-vomsac-check manual page for more information. .IR Example: .br ac_policy="-VOMS: /badvo" .br ac_policy="VOMS: /.*/Role=production" .TP .B authorizedvo authorizedvo - this attribute is used to advertise which VOs are authorized on the specific queue. Multiple entries are allowed. These entries will be shown in the MappingPolicy objects. If something is already defined in the [cluster] block, the shown VOs will be the union of those defined in [cluster] and those specific to this [queue] block. .IR Example: .br authorizedvo="LocalUsers" .br authorizedvo="atlas" .br authorizedvo="community.nordugrid.org" .TP .B cachetime cachetime - this affects the old infoproviders, and forces the validity time of the record.
.IR Example: .br cachetime="30" .TP .B sizelimit sizelimit - affects registration to EGIIS .IR Example: .br sizelimit="5000" .SH [registration/emir] block The services registration into EMIR block configures and enables the registration process of the services enabled in this configuration file into the EMI indexing service (EMIR). .TP .B emirurls emirurls - Comma-separated list of URLs of the EMIR services which are to accept the registration. This is mandatory. .IR Example: .br emirurls="https://somehost:60002/emir" .TP .B validity validity - Time in seconds for which registration records should stay valid. .IR Example: .br validity=600 .TP .B period period - Time in seconds defining how often the registration record should be sent to the registration service. .IR Example: .br period=60 .TP .B disablereg_xbes disablereg_xbes - may be used to selectively disable registration of the A-REX service. Possible values are yes and no. Default is no. .IR Example: .br disablereg_xbes="no" .SH [nordugridmap] block The [nordugridmap] block configuration is used to fine-tune the behaviour of nordugridmap - an ARC tool used to generate grid-mapfiles. Please refer to the [vo] block description for information on how to specify VO sources for mapfile generation. This block sets up general VO-independent parameters. .TP .B x509_user_key x509_user_cert, x509_user_key - public certificate and private key to be used when fetching sources over TLS (https:// and vomss:// source retrieval relies on this parameter). If not specified, the values defined in the [common] section will be used. If there is also no [common] section, the X509_USER_{CERT,KEY} variables are used. Default is '/etc/grid-security/host{cert,key}.pem' .IR Example: .br x509_user_key="/etc/grid-security/hostkey.pem" .br x509_user_cert="/etc/grid-security/hostcert.pem" .TP .B x509_cert_dir x509_cert_dir - the directory containing CA certificates. This information is needed by the 'require_issuerdn' [vo] block option. Default is '/etc/grid-security/certificates/'. .IR Example: .br x509_cert_dir="/etc/grid-security/certificates/" .TP .B generate_vomapfile generate_vomapfile - controls whether nordugridmap will generate the vo-mapfile used by arc-ur-logger. Default is 'yes'. .IR Example: .br generate_vomapfile="yes" .TP .B vomapfile vomapfile - path to the vo-mapfile location. Default is /etc/grid-security/grid-vo-mapfile .IR Example: .br vomapfile="/etc/grid-security/grid-vo-mapfile" .TP .B log_to_file log_to_file - controls whether the logging output of nordugridmap will be saved to a file. Default is 'no' (STDERR is used). .IR Example: .br log_to_file="yes" .TP .B logfile logfile - specifies the nordugridmap log file location when in use. Default is '/var/log/arc/nordugridmap.log'. .IR Example: .br logfile="/var/log/arc/nordugridmap.log" .TP .B cache_enable cache_enable - controls whether caching of external sources will be used. Default is 'yes'. .IR Example: .br cache_enable="yes" .TP .B cachedir cachedir - specifies the path where cached sources will be stored. Default is '/var/spool/nordugrid/gridmapcache/' .IR Example: .br cachedir="/var/spool/nordugrid/gridmapcache/" .TP .B cachetime cachetime - controls how long (in seconds) cached information remains valid. Default is 259200 (3 days). .IR Example: .br cachetime="259200" .TP .B issuer_processing issuer_processing - controls the behaviour of the [vo] block require_issuerdn parameter. Valid values are 'relaxed' and 'strict'. Please see the 'require_issuerdn' description in the [vo] block for details. Default is 'relaxed'.
.IR Example: .br issuer_processing="relaxed" .TP .B mapuser_processing mapuser_processing - controls the behaviour of the [vo] block mapped_unixid parameter usage. Valid values are 'overwrite' and 'keep'. Please see the 'mapped_unixid' description in the [vo] block for details. Default is 'keep'. .IR Example: .br mapuser_processing="keep" .TP .B allow_empty_unixid allow_empty_unixid - controls whether an empty (or unspecified) 'mapped_unixid' [vo] block option is allowed to be used. Please see the 'mapped_unixid' description in the [vo] block for details. Default is 'no' .IR Example: .br allow_empty_unixid="no" .TP .B voms_method voms_method - controls how to get information from voms(s) sources. Valid values are: soap - call the SOAP method directly using SOAP::Lite; get - use the old implementation that manually parses the XML response. Default is 'soap'. .IR Example: .br voms_method="soap" .TP .B debug debug level - controls the verbosity of nordugridmap output. Valid values are: 0 - FATAL - only critical fatal errors are shown; 1 - ERROR - errors, including non-critical ones, are shown; 2 - WARNING (default) - configuration errors that can be ignored; 3 - INFO - processing information; 4 - VERBOSE - a bit more processing information; 5 - DEBUG - a lot of processing information. When a test run is requested (the --test command line option of nordugridmap) the debug level is automatically set to 5 (DEBUG). Default is 2 (WARNING) .IR Example: .br debug="4" .TP .B fetch_timeout fetch_timeout - controls how long (in seconds) nordugridmap will wait for external source retrieval. Default is 15. .IR Example: .br fetch_timeout="15" .SH [acix/cacheserver] block The cache server component of ACIX runs alongside A-REX. It periodically scans the cache directories and composes a Bloom filter of the cache content which can be pulled by an ACIX index server. .TP .B hostname Hostname on which the cache server listens. Default is all available interfaces. .IR Example: .br hostname="myhost.org" .TP .B port Port on which the cache server listens. Default is 5443. .IR Example: .br port="6000" .TP .B logfile Log file location for the cache server. Default is /var/log/arc/acix-cache.log .IR Example: .br logfile="/tmp/acix-cache.log" .TP .B cachedump Whether to make a dump of the cache contents in a file at $TMP/ARC-ACIX/timestamp each time the cache server runs. Default is no. .IR Example: .br cachedump="yes" .SH [acix/indexserver] block The index server component of ACIX collects cache content filters from a set of cache servers configured in this block. The index server can be queried for the location of cached files. .TP .B cacheserver ACIX cache servers from which to pull information .IR Example: .br cacheserver="https://some.host:5443/data/cache" .br cacheserver="https://another.host:5443/data/cache" .SH [gangliarc] block Gangliarc provides monitoring of ARC-specific metrics through ganglia. It can be run with zero configuration or customised with options in the [gangliarc] block. .TP .B frequency The period between each information gathering cycle, in seconds. Default is 20. .IR Example: .br frequency="30" .TP .B gmetric_exec Path to the gmetric executable. Default is /usr/bin/gmetric. .IR Example: .br gmetric_exec="/usr/local/bin/gmetric" .TP .B logfile Log file of the daemon. Default is /var/log/arc/gangliarc.log. .IR Example: .br logfile="/tmp/gangliarc.log" .TP .B pidfile Pid file of the daemon. Default is /var/run/gangliarc.pid. .IR Example: .br pidfile="/tmp/gangliarc.pid" .TP .B python_bin_path Path to the python executable. Default is /usr/bin/python.
.IR Example: .br python_bin_path="/usr/local/bin/python" .TP .B metrics The metrics to be monitored. Default is "all". metrics takes a comma-separated list of one or more of the following metrics: - staging -- number of tasks in different data staging states - cache -- free cache space - session -- free session directory space - heartbeat -- last modification time of the A-REX heartbeat - processingjobs -- the number of jobs currently being processed by ARC (jobs between the PREPARING and FINISHING states) - failedjobs -- the number of failed jobs out of the last 100 finished - jobstates -- number of jobs in different A-REX internal stages - all -- all of the above metrics .IR Example: .br metrics="all"
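.P
To close, a minimal sketch of a [gangliarc] block. Gangliarc runs with zero configuration, so the block below merely illustrates how the sample values shown above could be combined; in particular the comma-separated metrics selection is an arbitrary illustration, not a default.
.nf
.B [gangliarc]
frequency="30"
logfile="/tmp/gangliarc.log"
metrics="staging,cache,heartbeat"
.fi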