AWS: Python S3


If you haven’t already done so, please refer to the AWS setup section, which is part of this series. I will continue to update this section over time.

To work with S3 you need to use the “connection_s3” resource you set up in the previous tutorial in this series on setting up the connection.
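As a reminder, a minimal sketch of what that connection presumably looks like, assuming the previous tutorial created it with boto3’s resource API (the variable name “connection_s3” is carried over from there):

import boto3

# Sketch of the connection from the earlier tutorial in this series;
# boto3 picks up the credentials configured during the AWS setup.
connection_s3 = boto3.resource('s3')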

To load a file from an S3 bucket you need to know the bucket name and the file name (key).

connection_s3.Object(##S3_BUCKET##, ##FILE_NAME##).load()

If you want to check whether a file exists on S3, you can do something like the below. Note that you will need to import botocore.

import botocore

def keyExists(key):
    file = connection_s3.Object(##S3_BUCKET##, key)

    try:
        # load() issues a HEAD request; it raises if the object is missing.
        file.load()
    except botocore.exceptions.ClientError as e:
        # A 404 means the key does not exist; re-raise any other error.
        if e.response['Error']['Code'] == '404':
            exists = False
        else:
            raise
    else:
        exists = True

    return exists, file
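For example, a call might look like the below (the key name here is hypothetical):

# Hypothetical usage - 'reports/data.zip' stands in for a real key.
exists, file = keyExists('reports/data.zip')
print(exists)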

If you want to copy a file from one bucket to another, or into a subfolder, you can do it like below.

connection_s3.Object(##S3_DESTINATION_BUCKET##, ##FILE_NAME##).copy_from(CopySource=##S3_SOURCE_BUCKET## + '/' + ##FILE_NAME##)
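CopySource also accepts a dict, which avoids the string concatenation. As a sketch, copying into an “archive/” subfolder of the same bucket might look like this (the prefix is hypothetical):

# Sketch: copy ##FILE_NAME## under a hypothetical 'archive/' prefix.
connection_s3.Object(##S3_BUCKET##, 'archive/' + ##FILE_NAME##).copy_from(
    CopySource={'Bucket': ##S3_SOURCE_BUCKET##, 'Key': ##FILE_NAME##}
)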

If you want to delete the file, you can use the “keyExists” function above and then call “delete” on the returned object.

##FILE##.delete()
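Putting the two together, a guarded delete might look like this:

# Only delete the object if it actually exists.
exists, file = keyExists(##FILE_NAME##)
if exists:
    file.delete()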

If you just want a bucket object, specify the bucket name on the S3 connection.

bucket = connection_s3.Bucket(##S3_BUCKET##)

To upload a file to an S3 bucket, you need to set the body, content type and key name. Take a look at the below example.

bucket.put_object(Body=##DATA##, ContentType="application/zip", Key=##FILE_NAME##)
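For instance, uploading a local zip file might look like the below sketch (the local file name is hypothetical):

# Sketch: read a hypothetical local zip and upload it under the same name.
with open('archive.zip', 'rb') as f:
    bucket.put_object(Body=f.read(), ContentType="application/zip", Key='archive.zip')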

If you want to loop over the objects in a bucket, it’s pretty straightforward.

for key in bucket.objects.all():
    file_name = key.key
    response = key.get()
    data = response['Body'].read()

If you want to filter the objects in a bucket, pass a prefix.

for key in bucket.objects.filter(Prefix='##PREFIX##'):
    file_name = key.key
    response = key.get()
    data = response['Body'].read()
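Tying it together, here is a sketch that downloads every object under a prefix to the current directory, assuming you want the local files named after the last part of each key (the 'logs/' prefix is hypothetical):

import os

# Sketch: download each object under a hypothetical 'logs/' prefix.
for key in bucket.objects.filter(Prefix='logs/'):
    file_name = os.path.basename(key.key)  # strip the prefix for the local name
    if not file_name:  # skip "folder" placeholder keys ending in '/'
        continue
    with open(file_name, 'wb') as f:
        f.write(key.get()['Body'].read())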