StrategicERP Integration with AWS S3 Drive

2023-10-28 12:48:52

Steps to create an Amazon S3 bucket

Log in to your AWS account.

1. Go to Services --> S3. This opens the Amazon S3 console.

2. Click on Buckets --> on the right, click 'Create bucket'

3. General configuration --> Bucket name: Strategicerp505 ; AWS Region: Asia Pacific (Mumbai) ap-south-1

4. Object Ownership  --> ACLs disabled (recommended)

5. Block Public Access settings for this bucket --> check 'Block all public access'

6. Bucket Versioning --> Disable


7. Default encryption --> Server-side encryption with AWS Key Management Service keys (SSE-KMS) ; Bucket Key: Enable

8. Advanced settings --> Disable

9. Click on Create bucket.

    Your bucket is now created.
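If you prefer the command line, the console steps above can be sketched with the AWS CLI. This is a sketch, not part of the original walkthrough: it assumes AWS CLI v2 is installed and configured with administrative credentials, and it uses the lowercase form of the bucket name, since S3 bucket names must be lowercase.

```shell
# Sketch of steps 3, 5 and 7 as AWS CLI calls (guarded so the script is a
# no-op where the CLI is not installed).
BUCKET=strategicerp505
REGION=ap-south-1
ENC_CONFIG='{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"aws:kms"},"BucketKeyEnabled":true}]}'

if command -v aws >/dev/null 2>&1; then
  # Step 3: create the bucket in the ap-south-1 region
  aws s3api create-bucket --bucket "$BUCKET" --region "$REGION" \
      --create-bucket-configuration LocationConstraint="$REGION"
  # Step 5: block all public access
  aws s3api put-public-access-block --bucket "$BUCKET" \
      --public-access-block-configuration \
      BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
  # Step 7: default encryption with SSE-KMS and a bucket key
  aws s3api put-bucket-encryption --bucket "$BUCKET" \
      --server-side-encryption-configuration "$ENC_CONFIG"
fi
```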


To Create User : Identity and Access Management (IAM)

Click on Create user and follow the steps (creating the user is necessary to obtain an Access key ID and Secret access key).



1. User name: itaakash

   Access type: Programmatic access

2. Set permissions: Add user to group (create a group or add the user to an existing one), or attach existing policies directly: AmazonS3FullAccess

3. Add tags (optional): keys and values --> click Create user

4. Download the Access key ID and Secret access key
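The same IAM steps can be sketched with the AWS CLI. Again an assumption-laden sketch: it presumes the CLI is installed and you hold IAM administrative permissions, and it reuses the user name from the walkthrough.

```shell
# Sketch: create the IAM user, attach AmazonS3FullAccess, and issue keys
# (guarded so the script is a no-op where the CLI is not installed).
USER_NAME=itaakash
POLICY_ARN=arn:aws:iam::aws:policy/AmazonS3FullAccess

if command -v aws >/dev/null 2>&1; then
  aws iam create-user --user-name "$USER_NAME"
  # Attach the managed policy directly (the "attach existing policies" path)
  aws iam attach-user-policy --user-name "$USER_NAME" --policy-arn "$POLICY_ARN"
  # Prints the AccessKeyId and SecretAccessKey once -- store them safely
  aws iam create-access-key --user-name "$USER_NAME"
fi
```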

Configuring AWS IAM User Credentials to Access Amazon S3


How to Mount Amazon S3 as a Drive for Cloud File Sharing

      Amazon S3 (Simple Storage Service) stores data as objects inside “buckets”, the basic unit of data storage in Amazon S3. Each object can be up to 5 TB in size.

     If you want access to AWS S3 to be available without a web browser, you can let users work through the interface of an operating system such as Linux or Windows.

   Access to Amazon S3 cloud storage from the command line can be handy in several scenarios. It is particularly useful with operating systems that don’t have a graphical user interface (GUI), such as VMs running in a public cloud, and for automating tasks such as copying files or creating cloud data backups.

  Mounting Amazon S3 Cloud Storage in Linux:

A.  AWS provides an API to work with Amazon S3 buckets using third-party applications. You can even write your own application that can interact with S3 buckets by using the Amazon API. 

B. By mounting the S3 bucket to the same directory with S3FS on each computer, you can create an application that uses one consistent path for uploading files to Amazon S3 cloud storage.

  In this knowledge base article we use S3FS to mount an Amazon S3 bucket as a disk drive to a Linux directory. S3FS, a solution based on FUSE (Filesystem in Userspace), was developed to mount S3 buckets to directories of Linux operating systems, similar to the way you mount a CIFS or NFS share as a network drive.

Let’s find out how to mount an Amazon S3 bucket to a Linux directory on CentOS 7.

  1. Update the repository tree:

sudo yum update

  2. Install s3fs from online software repositories:

sudo yum install s3fs-fuse

  3. You need to generate the access key ID and secret access key in the AWS web interface for your account (IAM user). The IAM user must have S3 full access.

  4. a) These keys are needed for AWS API access. You must have administrative permissions to generate the AWS access key ID and AWS secret access key.

  The administrator can generate the AWS keys for a user account in the Users section of the AWS console, on the Security credentials tab, by clicking the Create access key button.

  b) In the Create access key popup window, click Download .csv file or click Show under Secret access key. This is the only time you can see the secret access key in the AWS web interface. Store the AWS access key ID and secret access key in a safe place.


  5. Go back to the CentOS 7 console to create a configuration file for storing the AWS access key and secret access key needed to mount an S3 bucket with S3FS.


The file format is ACCESS_KEY:SECRET_ACCESS_KEY. Change ACCESS_KEY to your AWS access key ID and SECRET_ACCESS_KEY to your secret access key.

In this example, we store the configuration file with the AWS keys in the home directory of our user. Make sure that you store the file with the keys in a safe place that is not accessible to unauthorized persons.

echo ACCESS_KEY:SECRET_ACCESS_KEY > ~/.passwd-s3fs

  6. Check whether the keys were written to the file:

cat ~/.passwd-s3fs

  7. Set correct permissions for the passwd-s3fs file where the access keys are stored:

chmod 600 ~/.passwd-s3fs
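The steps above can also be combined into a slightly safer variant that restricts permissions before the keys are written, so the file is never readable by other users even for a moment. ACCESS_KEY and SECRET_ACCESS_KEY remain placeholders for your real credentials.

```shell
# Create ~/.passwd-s3fs with owner-only permissions from the start.
umask 077                          # files created below get mode 600
printf '%s:%s\n' "ACCESS_KEY" "SECRET_ACCESS_KEY" > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs           # explicit, in case the file already existed
ls -l ~/.passwd-s3fs               # verify owner-only permissions
```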

  8. Create the directory that will be used as a mount point for your S3 bucket. In this example, we create the s3-bucket directory in the user’s home directory.

mkdir ~/s3-bucket

  9. Now check in the browser: the name of the bucket used in this walkthrough is Strategicerp505. The test1.txt file was uploaded to our bucket in Amazon S3 before mounting the bucket to a Linux directory. It is not recommended to use a dot (.) in bucket names.



10. Now in the CentOS 7 console, let’s mount the bucket. Use the following command, setting the bucket name, the path to the directory used as the mount point, and the file that contains the AWS access key and secret access key:

s3fs bucket-name /path/to/mountpoint -o passwd_file=/path/passwd-s3fs

In our case, the command we use to mount our bucket is:

s3fs Strategicerp505 ~/s3-bucket -o passwd_file=~/.passwd-s3fs

  11. The bucket is mounted. Run these commands to check whether our bucket (Strategicerp505) has been mounted to the s3-bucket directory:

mount | grep bucket

df -h | grep bucket

  12. Let’s check the contents of the directory to which the bucket has been mounted:

ls -al ~/s3-bucket

The test1.txt file uploaded earlier via the web interface is present and displayed in the console output.


  13. Now you can try to create a new file on your hard disk drive and copy that file to the S3 bucket in your Linux console.

echo test2 > test2.txt

cp test2.txt ~/s3-bucket/

  14. Refresh the AWS web page where the files in your bucket are displayed. You should see the new test2.txt file that was copied to the S3 bucket from the Linux console via the directory to which the bucket is mounted.




B. How to mount an S3 bucket on Linux boot automatically

If you want to configure automatic mounting of an S3 bucket with S3FS on your Linux machine, create the passwd-s3fs file in /etc/passwd-s3fs, which is the standard location. After creating this file, you don’t need to use the -o passwd_file option to set the location of the file with your AWS keys manually.

 1. Create the /etc/passwd-s3fs file:

vim /etc/passwd-s3fs

i. Enter your AWS access key and secret access key as explained above.


ii. As an alternative you can store the keys in the /etc/passwd-s3fs file with the command:

echo ACCESS_KEY:SECRET_ACCESS_KEY > /etc/passwd-s3fs

2. Set the required permissions for the /etc/passwd-s3fs file:

chmod 640 /etc/passwd-s3fs

3. Edit the FUSE configuration file:

vim /etc/fuse.conf

Uncomment the user_allow_other line if you want to allow other (non-root) users on your Linux machine to use the Amazon S3 mount for file sharing.


4. Open /etc/fstab with a text editor:

vim /etc/fstab

Add the following line at the end of the file (adjust the mount point path and, if needed, the url option to your region’s S3 endpoint):

s3fs#Strategicerp505 /home/user1/s3-bucket/ fuse _netdev,allow_other,url=https://s3.amazonaws.com 0 0




Save the edited /etc/fstab file and quit the text editor.
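For reference, the fields of such an fstab entry break down as shown below. The url value here is an assumption (the global S3 endpoint); you can replace it with a region-specific endpoint such as https://s3.ap-south-1.amazonaws.com for the Mumbai region used in this walkthrough.

```
# <fstype>#<bucket>    <mount point>           <type> <options>                                         <dump> <fsck>
s3fs#Strategicerp505   /home/user1/s3-bucket/  fuse   _netdev,allow_other,url=https://s3.amazonaws.com  0      0
```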

5. Reboot the CentOS 7 machine to check whether the S3 bucket is mounted automatically on system boot:

init 6

Wait until your Linux machine is booted.

6. You can run commands to check whether the AWS S3 bucket was mounted automatically to the s3-bucket directory on CentOS 7 boot.

mount | grep bucket

df -h | grep bucket

ls -al /home/user1/s3-bucket/

In our case, the Amazon S3 cloud drive has been mounted automatically to the specified Linux directory on CentOS 7 boot. The configuration was applied successfully.

Now you can try to create a new file on your hard disk drive and copy that file to the S3 bucket in your Linux console.

echo test2 > test2.txt

cp test2.txt ~/s3-bucket/

Note: S3FS also supports working with rsync and file caching to reduce traffic.

