Amazon S3 Bucket Explorer



Bucket Explorer Features: if the Amazon S3 API supports an operation, Bucket Explorer supports it too: multipart upload and download for large files, multi-threaded batch S3 operations such as ACL and metadata updates, default settings applied while uploading objects, fast object listings, and a Comparer to compare and synchronize data between source and target. To create an S3 bucket in AWS Explorer, open the context (right-click) menu for the Amazon S3 node, and then choose Create Bucket. In the Create Bucket dialog box, type a name for the bucket. Bucket names must be unique across AWS.

  • Update April 15, 2019:
    Added an enhancement to the UDF to work with HTTP and HTTPS simultaneously;
    Added an enhancement to the UDF to work with both URL-encoded and non-encoded URLs (you need to comment/uncomment the relevant line).

Today I'll explain step-by-step how to calculate the signature to authenticate and download a file from the Amazon S3 Bucket service without third-party adapters.

Request

In summary, this interface receives the download URL, Bucket, AccessKeyID, SecretAccessKey, Token, and AWSRegion. A mapping calculates the signature from this information and sends it to the REST Adapter; the signature and the other parameters are inserted into the HTTP header.

Some of the information needed to calculate the signature is provided by another service. This post explains only how to calculate the signature, but enhancements are possible; for example, you could create a REST/SOAP lookup to get the Token and SecretAccessKey.

Response

The response is a file, but the REST Adapter doesn't work with formats other than XML or JSON, so you need to convert the file to binary and insert its content into an XML tag. For this conversion I recommend the adapter module FormatConversionBean, developed by @engswee.yeoh.

Request mapping

For the request mapping you need to create two structures: one for inbound and another for outbound.

Inbound

Outbound

After creating the structures for the request mapping (data type, message type, etc.), you need to create a message mapping.


Now you need to map the fields; pay attention to the next steps to configure the rules.

Rules for Message Mapping

  • Fields XAmzSecurityToken and Url are mapped directly.
  • Field XAmzSha256 is mapped with the constant value e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 (this string is the SHA-256 hash of an empty payload)
  • Field XAmzDate is mapped with a CurrentDate function (format yyyyMMdd'T'HHmmss'Z')
  • Field ContentType is mapped with a constant value application/x-www-form-urlencoded
  • Field Host is mapped with a UDF or ConstantValue.

The Host is the result of concatenating the Bucket with '.s3.amazonaws.com',
so you can use either a ConstantValue (eu01-s3-store.s3.amazonaws.com, for example) or a UDF that receives the bucket and returns the Host.

  • Field Authorization is mapped with a UDF.
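The constant mapped to XAmzSha256 above can be verified: it is the SHA-256 hash of an empty payload, which is exactly what a GET request carries. A quick Java check (class and method names are mine, not from the original UDF):

```java
import java.security.MessageDigest;
import java.nio.charset.StandardCharsets;

public class EmptyHash {
    // Hex-encode a byte array as lowercase hex, as SigV4 expects
    static String toHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    // SHA-256 hash of the request payload; a GET request has an empty body
    static String sha256Hex(String payload) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        return toHex(md.digest(payload.getBytes(StandardCharsets.UTF_8)));
    }

    public static void main(String[] args) throws Exception {
        // Prints e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
        System.out.println(sha256Hex(""));
    }
}
```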

In the Authorization field you insert the signature calculated by the UDF.
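The Authorization header value the UDF has to produce follows the SigV4 format documented by AWS: algorithm name, credential scope, signed-header list, and the signature. A minimal sketch of the assembly (the helper name and example values are illustrative, not the author's original code):

```java
public class AuthHeader {
    // Assemble the Authorization header value from its SigV4 parts.
    // accessKey, dateStamp, region, signedHeaders and signatureHex are
    // placeholders supplied by the rest of the signing logic.
    static String buildAuthorization(String accessKey, String dateStamp,
                                     String region, String signedHeaders,
                                     String signatureHex) {
        String scope = dateStamp + "/" + region + "/s3/aws4_request";
        return "AWS4-HMAC-SHA256 Credential=" + accessKey + "/" + scope
             + ", SignedHeaders=" + signedHeaders
             + ", Signature=" + signatureHex;
    }

    public static void main(String[] args) {
        System.out.println(buildAuthorization("AKIDEXAMPLE", "20190415",
            "us-east-1",
            "host;x-amz-content-sha256;x-amz-date;x-amz-security-token",
            "abc123"));
    }
}
```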

You also need to create some methods, which will be used by the UDF during signing.

and import the packages…
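The author's code snippets did not survive in this page, but the methods in question are the standard SigV4 key-derivation chain from the AWS documentation (kSecret → kDate → kRegion → kService → kSigning). A minimal Java sketch of those helpers, with class and method names of my own choosing:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

public class SigningKey {
    // HMAC-SHA256 of data using the given key
    static byte[] hmacSHA256(byte[] key, String data) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        return mac.doFinal(data.getBytes(StandardCharsets.UTF_8));
    }

    // Derive the SigV4 signing key by chaining HMACs:
    // kSecret -> kDate -> kRegion -> kService -> kSigning
    static byte[] getSignatureKey(String secretKey, String dateStamp,
                                  String region, String service) throws Exception {
        byte[] kSecret  = ("AWS4" + secretKey).getBytes(StandardCharsets.UTF_8);
        byte[] kDate    = hmacSHA256(kSecret, dateStamp);
        byte[] kRegion  = hmacSHA256(kDate, region);
        byte[] kService = hmacSHA256(kRegion, service);
        return hmacSHA256(kService, "aws4_request");
    }

    public static void main(String[] args) throws Exception {
        byte[] key = getSignatureKey("mySecretKey", "20190415", "us-east-1", "s3");
        System.out.println("signing key length: " + key.length);  // 32 (HMAC-SHA256 output)
    }
}
```

The final signature is then the hex-encoded HMAC-SHA256 of the string to sign, computed with this derived key.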

After developing the UDF, you need to configure it with the inbound values.

Note: the format of CurrentDate is yyyyMMdd'T'HHmmss'Z'.
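In Java, the same format can be produced with SimpleDateFormat pinned to UTC (a small illustrative helper, not part of the original mapping):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class AmzDate {
    // Format a date as the x-amz-date header expects: yyyyMMdd'T'HHmmss'Z', in UTC
    static String amzDate(Date date) {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyyMMdd'T'HHmmss'Z'");
        fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
        return fmt.format(date);
    }

    public static void main(String[] args) {
        System.out.println(amzDate(new Date()));  // e.g. 20190415T101530Z
    }
}
```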

Now save and activate the request mapping.

Response mapping

The response mapping is simple and doesn't need much explanation.

Configure the interface normally …


After creating the Request/Response mappings, build the Operation Mapping and an Integrated Configuration as usual. The Communication Channel can be of any synchronous type, but the Receiver must be of type REST and configured as below.


Receiver Communication Channel

Now you need to configure the Receiver Channel. The values generated in the request message mapping are stored in variables, and these variables are used in the communication channel.

The stored variables are then used in the HTTP header; here you configure how the canonical request is created.
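Per the AWS SigV4 documentation, the canonical request that this header configuration must reproduce is the HTTP method, the URI path, the query string, the canonical headers, the signed-header list, and the payload hash, joined by newlines. A sketch for a GET with no query string (helper and parameter names are mine):

```java
public class CanonicalRequest {
    // Build the SigV4 canonical request for a GET with no query string.
    // objectKey must already be URL-encoded; header names must be
    // lowercase and listed in sorted order.
    static String build(String objectKey, String host, String payloadHash,
                        String amzDate, String token) {
        String canonicalHeaders = "host:" + host + "\n"
            + "x-amz-content-sha256:" + payloadHash + "\n"
            + "x-amz-date:" + amzDate + "\n"
            + "x-amz-security-token:" + token + "\n";
        String signedHeaders = "host;x-amz-content-sha256;x-amz-date;x-amz-security-token";
        return "GET\n"
             + "/" + objectKey + "\n"
             + "\n"                      // empty canonical query string
             + canonicalHeaders + "\n"
             + signedHeaders + "\n"
             + payloadHash;
    }

    public static void main(String[] args) {
        System.out.println(build("file.txt", "eu01-s3-store.s3.amazonaws.com",
            "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
            "20190415T101530Z", "exampleToken"));
    }
}
```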

You also need to configure the REST operation; in this case the operation is GET.

And finally, configure the adapter module FormatConversionBean to convert the file into a Base64 string.

IMPORTANT: The adapter module FormatConversionBean isn't standard, so you need to deploy it if you haven't already. For more information and to download the module, see the FormatConversionBean link in the references.

Save and activate all objects; now let's test!

Fill in all the fields correctly in the interface and call the created service; the response should be the file as a Base64 string.
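If a consumer then needs the original file back, decoding the Base64 string is a one-liner in Java (the sample string here is illustrative, not real payload data):

```java
import java.util.Base64;
import java.nio.charset.StandardCharsets;

public class DecodePayload {
    // Decode the Base64 string from the response payload back into raw file bytes
    static byte[] decode(String b64) {
        return Base64.getDecoder().decode(b64);
    }

    public static void main(String[] args) {
        byte[] fileBytes = decode("SGVsbG8gUzMh");
        System.out.println(new String(fileBytes, StandardCharsets.UTF_8));  // Hello S3!
    }
}
```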

If you analyze the log of the request messages, you can see the parameters populated in the HTTP header and that the communication succeeded (HTTP 200),

and the response (the file) converted to a Base64 string.

That's all! I hope this has been helpful, and I look forward to your feedback on this post.

References

How to Calculate AWS Signature Version 4
https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-header-based-auth.html

Module Adapter FormatConversionBean
https://blogs.sap.com/2015/03/25/formatconversionbean-one-bean-to-rule-them-all/

PI REST Adapter – Define custom http header elements
https://blogs.sap.com/2015/04/14/pi-rest-adapter-define-custom-http-header-elements/

Versioning is an exciting feature of Amazon S3 that allows you to create versions of objects instead of overwriting them, and to keep those versions on S3. Versioning support dramatically increases the range of possible Amazon S3 applications, limited only by your imagination. It is currently supported across all Amazon S3 regions.

CloudBerry Explorer comes with support for Amazon S3 bucket versioning. You can turn on versioning aware mode, turn on versioning for specific buckets and perform common file operations on versions. This article will give you some ideas on how to get started.

How CloudBerry Explorer Supports Versioning

To turn versioning on for a specific bucket, click Versioning in the context menu and check the appropriate checkbox. Now you will be able to create versions of the objects in that bucket.

Try copying an object with the same name to the bucket several times. To see the versions, click the Show Versions item in the context menu.

This option will show a bottom panel with the list of versions for a selected file. The current revision is a version of the object that is available using the regular Amazon S3 API. Now you can do all the regular file operations with the versions such as copy, delete, move, rename, etc.

What About Deleted Files?

As you can guess (or read in the versioning documentation), when you turn versioning on for a bucket and then delete a file, a new version is created with the deleted attribute. This is awesome if you delete a file by mistake, as you can quickly restore it. To show deleted files you have to turn on the corresponding option. This option is global and can be found under Tools | Options in the main menu.

Once you have done that, try deleting a file in a bucket with versioning turned on. You will notice that the file becomes grayed out, and a new version marked as deleted appears in the versioning panel.

If you want to restore a deleted file, or any other version of a file, select the version and click the Restore button. The file will be available again.

Our MSP360 Backup users know that we have our own implementation of file versioning. We built it a while ago, prior to Amazon announcing its versioning implementation. With that in mind, we'd like to urge you not to enable Amazon's versioning if you're already using MSP360 Backup's versioning mechanism. Having two different versioning mechanisms is superfluous and will result in AWS surcharges.

Featured Product


  • File management in Amazon S3 and S3-compatible storage
  • Encryption and compression
  • GUI and CLI



