
Amazon::S3::Bucket(3pm) User Contributed Perl Documentation Amazon::S3::Bucket(3pm)


NAME

Amazon::S3::Bucket - A container class for an S3 bucket and its contents.


SYNOPSIS

  use Amazon::S3;

  # creates bucket object (no "bucket exists" check)
  my $bucket = $s3->bucket("foo");

  # create resource with meta data (attributes)
  my $keyname = 'testing.txt';
  my $value   = 'T';

  $bucket->add_key(
      $keyname, $value,
      {   content_type        => 'text/plain',
          'x-amz-meta-colour' => 'orange',
      }
  );

  # list keys in the bucket
  $response = $bucket->list
      or die $s3->err . ": " . $s3->errstr;

  print $response->{bucket}, "\n";

  for my $key (@{ $response->{keys} }) {
      print "\t", $key->{key}, "\n";
  }

  # check if resource exists.
  print "$keyname exists\n" if $bucket->head_key($keyname);

  # delete key from bucket
  $bucket->delete_key($keyname);




METHODS AND SUBROUTINES

 new

Instantiates a new bucket object.

Pass a hash or hash reference containing various options:

bucket
The name (identifier) of the bucket.
account
The Amazon::S3 object (representing the S3 account) this bucket is associated with.
buffer_size
The buffer size used for reading and writing objects to S3.

default: 4K

region
If no region is set and "verify_region" is set to true, the region of the bucket will be determined by calling the "get_location_constraint" method. Note that this will decrease the performance of the constructor. If you know the region or are operating in only one region, set the region in the "account" object ("Amazon::S3").
logger
Sets the logger object (should be an object capable of providing at least a "debug" and a "trace" method for recording log messages). If no logger object is passed, the "account" object's logger object will be used.
verify_region
Indicates that the bucket's region should be determined by calling the "get_location_constraint" method.

default: false

NOTE: This method does not check if a bucket actually exists unless you set "verify_region" to true. If the bucket does not exist, the constructor will set the region to the default region specified by the Amazon::S3 object ("account") that you passed.

Typically a developer will not call this method directly, but will work through the interface in Amazon::S3, which handles bucket creation.
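As a sketch of a direct call (assuming $s3 is an existing Amazon::S3 account object, and using the constructor options described above):

```perl
  use Amazon::S3::Bucket;

  my $bucket = Amazon::S3::Bucket->new(
      {   bucket        => 'foo',
          account       => $s3,   # an Amazon::S3 object
          verify_region => 1,     # look up the bucket's region at construction
      }
  );
```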


 add_key( key, value, configuration)

Write a new or existing object to S3.

key
A string identifier for the object being written to the bucket.
value
A SCALAR string representing the contents of the object.
configuration
A HASHREF of configuration data for this key. The configuration is generally the HTTP headers you want to pass to the S3 service. The client library will add all necessary headers. Use the configuration hash to override headers the library would otherwise send, or to add headers that are not typically required for S3 interactions.
acl_short (optional)
Besides overriding or adding HTTP headers, this HASHREF can contain an "acl_short" key to set the permissions (access) of the resource without a separate call via "add_acl" or in the form of an XML document. See the documentation in "add_acl" for the values and usage.

Returns a boolean indicating the success or failure of the call. Check "err" and "errstr" for error messages if this operation fails. To examine the raw output of the response from the API call, use the "last_response()" method.

  my $retval = $bucket->add_key('foo', $content, {});

  if ( !$retval ) {
    print STDERR Dumper([$bucket->err, $bucket->errstr, $bucket->last_response]);
  }


 add_key_filename( key, filename, configuration )

This method works like "add_key" except the value is assumed to be a filename on the local file system. The file will be streamed rather than loaded into memory in one big chunk.
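For example (the file path and content type here are illustrative):

```perl
  # stream a local file to S3 rather than slurping it into memory
  $bucket->add_key_filename(
      'backup.tar.gz', '/tmp/backup.tar.gz',
      { content_type => 'application/gzip' }
  ) or die $s3->err . ": " . $s3->errstr;
```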

head_key $key_name

Returns a configuration HASH for the given key. If the key does not exist in the bucket, "undef" will be returned.

The returned HASH contains the object's metadata, such as its content type, content length, and ETag.

get_key $key_name, [$method]

Takes a key and an optional HTTP method and fetches it from S3. The default HTTP method is GET.

The method returns "undef" if the key does not exist in the bucket and throws an exception (dies) on server errors.

On success, the method returns a HASHREF containing the object's data and metadata.


get_key_filename $key_name, $method, $filename

This method works like "get_key", but takes an additional filename argument that the S3 resource will be written to.

delete_key $key_name

Permanently removes $key_name from the bucket. Returns a boolean value indicating the operation's success.
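For example:

```perl
  $bucket->delete_key('testing.txt')
      or die $s3->err . ": " . $s3->errstr;
```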


 delete_bucket

Permanently removes the bucket from the server. A bucket cannot be removed if it contains any keys (contents).

This is an alias for "$s3->delete_bucket($bucket)".


 list

List all keys in this bucket.

See "list_bucket" in Amazon::S3 for documentation of this method.


 list_v2

See "list_bucket_v2" in Amazon::S3 for documentation of this method.


 list_all

List all keys in this bucket without having to worry about 'marker'. This may make multiple requests to S3 under the hood.

See "list_bucket_all" in Amazon::S3 for documentation of this method.
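A minimal sketch of iterating over every key (the "key" member of each key hash is shown in the SYNOPSIS; the other members follow the "list_bucket" documentation in Amazon::S3):

```perl
  my $response = $bucket->list_all
      or die $s3->err . ": " . $s3->errstr;

  for my $key ( @{ $response->{keys} } ) {
      print $key->{key}, "\n";
  }
```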


 list_all_v2

Same as "list_all" but uses the version 2 API for listing keys.

See "list_bucket_all_v2" in Amazon::S3 for documentation of this method.


 get_acl( [key] )

Retrieves the Access Control List (ACL) for the bucket or resource as an XML document.

key (optional)
The key of the stored resource to fetch. By default the method returns the ACL for the bucket itself.
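For example:

```perl
  # ACL of the bucket itself
  my $bucket_acl = $bucket->get_acl;

  # ACL of a single resource (the key name is illustrative)
  my $object_acl = $bucket->get_acl('testing.txt');
```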



 add_acl( configuration )

Sets the Access Control List (ACL) for the bucket or resource. Requires a HASHREF argument with one of the following keys:

acl_xml
An XML string which contains access control information that matches Amazon's published schema.
acl_short
Alternative shorthand notation for common types of ACLs that can be used in place of an ACL XML document.

According to the Amazon S3 API documentation, the recognized "acl_short" values are:

private
Owner gets FULL_CONTROL. No one else has any access rights. This is the default.
public-read
Owner gets FULL_CONTROL and the anonymous principal is granted READ access. If this policy is used on an object, it can be read from a browser with no authentication.
public-read-write
Owner gets FULL_CONTROL and the anonymous principal is granted READ and WRITE access. This is a useful policy to apply to a bucket if you intend for any anonymous user to PUT objects into the bucket.
authenticated-read
Owner gets FULL_CONTROL, and any principal authenticated as a registered Amazon S3 user is granted READ access.
key (optional)
The key name to apply the permissions to. If the key is not provided, the bucket ACL will be set.

Returns a boolean indicating the operation's success.
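A minimal sketch using the shorthand notation (the key name is illustrative):

```perl
  # make a single object publicly readable
  $bucket->add_acl(
      {   key       => 'testing.txt',
          acl_short => 'public-read',
      }
  ) or die $s3->err . ": " . $s3->errstr;
```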


 get_location_constraint

Returns the location constraint (the region the bucket resides in) for a bucket.

Valid return values are the S3 region identifiers (for example, "eu-west-1"). Buckets in the US East (N. Virginia) region return a null location constraint.


For more information on location constraints, refer to the Amazon S3 API documentation for GetBucketLocation.
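For example, assuming a null constraint should be treated as the US East (N. Virginia) region:

```perl
  my $region = $bucket->get_location_constraint // 'us-east-1';
```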


 err

The S3 error code for the last error the account encountered.


 errstr

A human readable error string for the last error the account encountered.


 error

The decoded XML string, as a hash object, of the last error.


 last_response

Returns the last "HTTP::Response" from an API call.


MULTIPART UPLOAD SUPPORT

From Amazon's website:

Multipart upload allows you to upload a single object as a set of parts. Each part is a contiguous portion of the object's data. You can upload these object parts independently and in any order. If transmission of any part fails, you can retransmit that part without affecting other parts. After all parts of your object are uploaded, Amazon S3 assembles these parts and creates the object. In general, when your object size reaches 100 MB, you should consider using multipart uploads instead of uploading the object in a single operation.

See the Amazon S3 documentation for more information about multipart uploads.

  • Maximum object size 5TB
  • Maximum number of parts 10,000
  • Part numbers 1 to 10,000 (inclusive)
  • Part size 5MB to 5GB. There is no minimum size requirement for the last part of your multipart upload.
  • Maximum number of parts returned for a list parts request - 1000
  • Maximum number of multipart uploads returned in a list multipart uploads request - 1000

A multipart upload begins by calling "initiate_multipart_upload()". This will return an identifier that is used in subsequent calls.

 my $bucket = $s3->bucket('my-bucket');
 my $id = $bucket->initiate_multipart_upload('some-big-object');
 my $part_list = {};
 my $part = 1;
 my $etag = $bucket->upload_part_of_multipart_upload('some-big-object', $id, $part, $data, length $data);
 $part_list->{$part++} = $etag;
 $bucket->complete_multipart_upload('some-big-object', $id, $part_list);

 upload_multipart_object( ... )

A convenience routine that encapsulates the multipart upload process. Accepts a hash or hash reference of arguments. If successful, returns a reference to a hash that contains the part numbers and etags of the uploaded parts.

You can pass a data object, callback routine or a file handle.

key
Name of the key to create.
data
Scalar object that contains the data to write to S3.
callback
Optionally provide a callback routine that will be called until you return a buffer with a length of 0. Your callback will receive no arguments but should return a tuple consisting of a reference to a scalar object that contains the data to write and a scalar that represents the length of the data. Once you return a zero length buffer, the multipart process will be completed.
fh
File handle of an open file. The file must be greater than the minimum chunk size for multipart uploads, otherwise the method will throw an exception.
abort_on_error
Indicates whether the multipart upload should be aborted if an error is encountered. Amazon will charge you for the storage of parts that have been uploaded unless you abort the upload.

default: true
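A minimal sketch using a file handle (the file name is illustrative, and the file must exceed the minimum part size noted above):

```perl
  open my $fh, '<', '/tmp/big-file.bin'
      or die "could not open file: $!";

  my $part_list = $bucket->upload_multipart_object(
      key => 'big-file.bin',
      fh  => $fh,
  );
```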


 abort_multipart_upload( key, multipart-upload-id )

Abort a multipart upload.


 complete_multipart_upload( key, multipart-upload-id, parts )

Signal completion of a multipart upload. "parts" is a reference to a hash of part numbers and etags.


 initiate_multipart_upload( key, headers )

Initiate a multipart upload. Returns an id used in subsequent calls to "upload_part_of_multipart_upload()".


List all the uploaded parts of a multipart upload.


List multipart uploads in progress.


  upload_part_of_multipart_upload(key, id, part, data, length)

Upload a portion of a multipart upload.

key
Name of the key in the bucket to create.
id
The multipart-upload id returned by the "initiate_multipart_upload" call.
part
The next part number (part numbers start at 1).
data
Scalar or reference to a scalar that contains the data to upload.
length
Length of the data.




Please see the Amazon::S3 manpage for author, copyright, and license information.


2022-08-03 perl v5.34.0