Commit 5fafc62

related docs links, resource naming
1 parent 23732d8 commit 5fafc62

8 files changed

Lines changed: 264 additions & 247 deletions

File tree

tuts/003-s3-gettingstarted/README.md

Lines changed: 7 additions & 0 deletions
@@ -50,3 +50,10 @@ echo 'y' | bash s3-gettingstarted.sh
 ## Cost

 Free tier eligible. Minimal charges for a few small objects.
+
+## Related docs
+
+- [Getting started with Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/GetStartedWithS3.html)
+- [Using versioning in S3 buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Versioning.html)
+- [Setting default server-side encryption](https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-encryption.html)
+- [Blocking public access to your S3 storage](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-control-block-public-access.html)

tuts/003-s3-gettingstarted/s3-gettingstarted.md

Lines changed: 136 additions & 127 deletions
@@ -19,7 +19,7 @@ Amazon S3 stores data as objects within containers called buckets. Each bucket m
 First, let's generate a unique bucket name and determine your AWS region:

 ```
-BUCKET_NAME="demo-s3-bucket-$(openssl rand -hex 6)"
+BUCKET_NAME="amzn-s3-demo-$(cat /dev/urandom | tr -dc 'a-f0-9' | fold -w 12 | head -n 1)"
 REGION=$(aws configure get region)
 REGION=${REGION:-us-east-1}
 ```
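The updated name generator can be exercised locally before any AWS call. A minimal sketch (no AWS dependency; the `amzn-s3-demo-` prefix and hex-suffix pattern are taken from the diff above):

```shell
# Generate a bucket name with a random 12-character hex suffix,
# mirroring the tutorial's pipeline (no openssl dependency).
BUCKET_NAME="amzn-s3-demo-$(cat /dev/urandom | tr -dc 'a-f0-9' | fold -w 12 | head -n 1)"

# S3 bucket names must be 3-63 characters of lowercase letters,
# numbers, and hyphens; this pattern always satisfies that.
echo "$BUCKET_NAME"
```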
@@ -36,210 +36,209 @@ aws s3api create-bucket --bucket "$BUCKET_NAME"
 # For all other regions
 aws s3api create-bucket \
     --bucket "$BUCKET_NAME" \
-    --region "$REGION" \
     --create-bucket-configuration LocationConstraint="$REGION"
 ```

 The output shows the location URL of your new bucket:

 ```
 {
-    "Location": "http://demo-s3-bucket-abcd1234.s3.amazonaws.com/"
+    "Location": "http://amzn-s3-demo-abcd1234abcd.s3.amazonaws.com/"
 }
 ```

-After creating your bucket, it's important to configure security settings. Let's apply some best practices:
+## Upload an object

-**Block public access (recommended for security)**
+Now that your bucket is created, let's upload a file. First, create a sample text file:

 ```
-aws s3api put-public-access-block \
-    --bucket "$BUCKET_NAME" \
-    --public-access-block-configuration "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"
-```
-
-**Enable versioning (helps protect against accidental deletion)**
-
-```
-aws s3api put-bucket-versioning \
-    --bucket "$BUCKET_NAME" \
-    --versioning-configuration Status=Enabled
-```
-
-**Set default encryption (protects your data at rest)**
-
-```
-aws s3api put-bucket-encryption \
-    --bucket "$BUCKET_NAME" \
-    --server-side-encryption-configuration '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]}'
-```
-
-## Upload objects to your bucket
-
-Now that your bucket is configured, let's upload some files. First, create a sample text file:
-
-```
-echo "This is a sample file for the S3 tutorial." > sample-file.txt
+echo "Hello, Amazon S3! This is a sample file for the getting started tutorial." > sample.txt
 ```

 Upload this file to your bucket:

 ```
 aws s3api put-object \
     --bucket "$BUCKET_NAME" \
-    --key "sample-file.txt" \
-    --body "sample-file.txt"
+    --key "sample.txt" \
+    --body "sample.txt"
 ```

-The response includes an ETag (entity tag) that uniquely identifies the content of the object, and since we enabled encryption, it also shows the encryption method:
+The response includes an ETag (entity tag) that uniquely identifies the content of the object:

 ```
 {
-    "ETag": "\"4f4cf806569737e1f3ea064a1d4813db\"",
-    "ServerSideEncryption": "AES256",
-    "VersionId": "9RCg6lFF_CmB.r_YlMS8sdPBiv878gQI"
+    "ETag": "\"abcd1234abcd1234abcd1234abcd1234\""
 }
 ```

-You can also upload files with additional metadata. Let's create another file and add some metadata to it:
-
-```
-echo "This is a document with metadata." > sample-document.txt
-
-aws s3api put-object \
-    --bucket "$BUCKET_NAME" \
-    --key "documents/sample-document.txt" \
-    --body "sample-document.txt" \
-    --content-type "text/plain" \
-    --metadata "author=AWSDocumentation,purpose=tutorial"
-```
-
-Notice that we used `documents/` in the key name. This creates a logical folder structure in your bucket, even though S3 is actually a flat object store.
-
 ## Download and verify objects

 To download an object from your bucket to your local machine:

 ```
 aws s3api get-object \
     --bucket "$BUCKET_NAME" \
-    --key "sample-file.txt" \
-    "downloaded-sample-file.txt"
+    --key "sample.txt" \
+    "downloaded-sample.txt"
 ```

-The command downloads the object and saves it as `downloaded-sample-file.txt` in your current directory. The output provides metadata about the object:
+The command downloads the object and saves it as `downloaded-sample.txt` in your current directory. The output provides metadata about the object:

 ```
 {
     "AcceptRanges": "bytes",
-    "LastModified": "Thu, 22 May 2025 20:39:53 GMT",
-    "ContentLength": 43,
-    "ETag": "\"4f4cf806569737e1f3ea064a1d4813db\"",
-    "VersionId": "9RCg6lFF_CmB.r_YlMS8sdPBiv878gQI",
+    "LastModified": "2026-01-13T20:39:53+00:00",
+    "ContentLength": 75,
+    "ETag": "\"abcd1234abcd1234abcd1234abcd1234\"",
     "ContentType": "binary/octet-stream",
-    "ServerSideEncryption": "AES256",
     "Metadata": {}
 }
 ```

-If you just want to check if an object exists or view its metadata without downloading it:
+## Copy an object to a folder prefix
+
+Although S3 is a flat object store, you can simulate folders by using key name prefixes. Let's copy the sample file into a `backup/` prefix:

 ```
-aws s3api head-object \
+aws s3api copy-object \
     --bucket "$BUCKET_NAME" \
-    --key "sample-file.txt"
+    --copy-source "$BUCKET_NAME/sample.txt" \
+    --key "backup/sample.txt"
 ```

-This returns the same metadata information without transferring the actual object content.
+The response includes information about the copy operation:

-## Organize objects with folders
+```
+{
+    "CopyObjectResult": {
+        "ETag": "\"abcd1234abcd1234abcd1234abcd1234\"",
+        "LastModified": "2026-01-13T20:39:59+00:00"
+    }
+}
+```

-Although S3 is a flat object store, you can simulate folders by using key name prefixes. Let's create a folder structure and copy an existing object into it.
+## Enable versioning

-First, create a folder by uploading an empty object with a trailing slash:
+Versioning helps protect against accidental deletion by keeping multiple variants of an object in the same bucket.

 ```
-touch empty-file.tmp
-aws s3api put-object \
+aws s3api put-bucket-versioning \
     --bucket "$BUCKET_NAME" \
-    --key "favorite-files/" \
-    --body empty-file.tmp
+    --versioning-configuration Status=Enabled
 ```

-Now, copy the sample file into this folder:
+With versioning enabled, uploading a file with the same key creates a new version instead of overwriting the original. Let's upload a second version of the sample file:

 ```
-aws s3api copy-object \
+echo "Hello, Amazon S3! This is version 2 of the sample file." > sample.txt
+
+aws s3api put-object \
     --bucket "$BUCKET_NAME" \
-    --copy-source "$BUCKET_NAME/sample-file.txt" \
-    --key "favorite-files/sample-file.txt"
+    --key "sample.txt" \
+    --body "sample.txt"
 ```

-The response includes information about the copy operation:
+The response now includes a `VersionId`:

 ```
 {
-    "CopySourceVersionId": "9RCg6lFF_CmB.r_YlMS8sdPBiv878gQI",
-    "VersionId": "rBtZnoxd0V6rPxUPDUYmPz1CzRXbIIS7",
-    "ServerSideEncryption": "AES256",
-    "CopyObjectResult": {
-        "ETag": "\"4f4cf806569737e1f3ea064a1d4813db\"",
-        "LastModified": "2025-05-22T20:39:59.000Z"
-    }
+    "ETag": "\"abcd1234abcd1234abcd1234abcd1234\"",
+    "VersionId": "abcdxmpl1234abcd1234abcd1234abcd"
 }
 ```

-Let's list all objects in the bucket to see our folder structure:
+## Configure default encryption
+
+Default encryption ensures that all objects stored in the bucket are encrypted at rest using server-side encryption with Amazon S3 managed keys (SSE-S3):

 ```
-aws s3api list-objects-v2 \
+aws s3api put-bucket-encryption \
     --bucket "$BUCKET_NAME" \
-    --query 'Contents[].Key' \
-    --output table
+    --server-side-encryption-configuration '{
+        "Rules": [
+            {
+                "ApplyServerSideEncryptionByDefault": {
+                    "SSEAlgorithm": "AES256"
+                },
+                "BucketKeyEnabled": true
+            }
+        ]
+    }'
 ```

-The output shows all objects, including our folder structure:
+## Block public access
+
+Blocking public access is a security best practice that prevents objects in your bucket from being made public:

 ```
-------------------------------------
-|          ListObjectsV2           |
-+----------------------------------+
-|  documents/sample-document.txt   |
-|  favorite-files/                 |
-|  favorite-files/sample-file.txt  |
-|  sample-file.txt                 |
-+----------------------------------+
+aws s3api put-public-access-block \
+    --bucket "$BUCKET_NAME" \
+    --public-access-block-configuration '{
+        "BlockPublicAcls": true,
+        "IgnorePublicAcls": true,
+        "BlockPublicPolicy": true,
+        "RestrictPublicBuckets": true
+    }'
 ```

-You can also list objects within a specific folder:
+## Add tags to your bucket
+
+Tags help you categorize your AWS resources for cost allocation, access control, and organization:

 ```
-aws s3api list-objects-v2 \
+aws s3api put-bucket-tagging \
     --bucket "$BUCKET_NAME" \
-    --prefix "favorite-files/" \
-    --query 'Contents[].Key' \
-    --output table
+    --tagging '{
+        "TagSet": [
+            {
+                "Key": "Environment",
+                "Value": "Tutorial"
+            },
+            {
+                "Key": "Project",
+                "Value": "S3-GettingStarted"
+            }
+        ]
+    }'
 ```

-This shows only the objects within the "favorite-files" folder:
+Verify the tags were applied:

 ```
-------------------------------------
-|          ListObjectsV2           |
-+----------------------------------+
-|  favorite-files/                 |
-|  favorite-files/sample-file.txt  |
-+----------------------------------+
+aws s3api get-bucket-tagging \
+    --bucket "$BUCKET_NAME"
 ```

-## Add tags to your bucket
+```
+{
+    "TagSet": [
+        {
+            "Key": "Environment",
+            "Value": "Tutorial"
+        },
+        {
+            "Key": "Project",
+            "Value": "S3-GettingStarted"
+        }
+    ]
+}
+```

-Tags help you categorize your AWS resources for cost allocation, access control, and organization:
+## List objects and versions
+
+List all objects in the bucket to see your folder structure:

 ```
-aws s3api put-bucket-tagging \
-    --bucket "$BUCKET_NAME" \
-    --tagging 'TagSet=[{Key=Project,Value=S3Tutorial},{Key=Environment,Value=Demo}]'
+aws s3api list-objects-v2 \
+    --bucket "$BUCKET_NAME"
+```
+
+Since versioning is enabled, you can also list all versions of objects in the bucket. This shows both the current and previous versions of `sample.txt`:
+
+```
+aws s3api list-object-versions \
+    --bucket "$BUCKET_NAME"
 ```

 ## Clean up resources
@@ -250,20 +249,30 @@ For buckets with versioning enabled, you need to delete all object versions befo

 ```
 # Delete all object versions
-VERSIONS=$(aws s3api list-object-versions --bucket "$BUCKET_NAME" --query 'Versions[].{Key:Key,VersionId:VersionId}' --output json)
-if [ -n "$VERSIONS" ] && [ "$VERSIONS" != "null" ]; then
-    echo "{\"Objects\": $VERSIONS}" > versions.json
-    aws s3api delete-objects --bucket "$BUCKET_NAME" --delete file://versions.json
-    rm versions.json
-fi
+aws s3api list-object-versions \
+    --bucket "$BUCKET_NAME" \
+    --query "Versions[].{Key:Key,VersionId:VersionId}" \
+    --output text | while IFS=$'\t' read -r KEY VERSION_ID; do
+    if [ -n "$KEY" ] && [ "$KEY" != "None" ]; then
+        aws s3api delete-object \
+            --bucket "$BUCKET_NAME" \
+            --key "$KEY" \
+            --version-id "$VERSION_ID"
+    fi
+done

 # Delete all delete markers
-MARKERS=$(aws s3api list-object-versions --bucket "$BUCKET_NAME" --query 'DeleteMarkers[].{Key:Key,VersionId:VersionId}' --output json)
-if [ -n "$MARKERS" ] && [ "$MARKERS" != "null" ]; then
-    echo "{\"Objects\": $MARKERS}" > markers.json
-    aws s3api delete-objects --bucket "$BUCKET_NAME" --delete file://markers.json
-    rm markers.json
-fi
+aws s3api list-object-versions \
+    --bucket "$BUCKET_NAME" \
+    --query "DeleteMarkers[].{Key:Key,VersionId:VersionId}" \
+    --output text | while IFS=$'\t' read -r KEY VERSION_ID; do
+    if [ -n "$KEY" ] && [ "$KEY" != "None" ]; then
+        aws s3api delete-object \
+            --bucket "$BUCKET_NAME" \
+            --key "$KEY" \
+            --version-id "$VERSION_ID"
+    fi
+done
 ```

 After deleting all object versions, you can delete the bucket:
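The tab-separated parsing used by the new cleanup loops can be exercised without touching AWS, since `list-object-versions --output text` emits one key/version pair per line. A minimal sketch with canned input standing in for the CLI output (the keys and version IDs here are hypothetical):

```shell
# Simulate the tab-separated lines that `--output text` would emit,
# then parse them exactly as the cleanup loop does.
printf 'sample.txt\tv1\nbackup/sample.txt\tv2\n' |
while IFS=$'\t' read -r KEY VERSION_ID; do
    # Skip the literal "None" that an empty query result produces
    if [ -n "$KEY" ] && [ "$KEY" != "None" ]; then
        echo "would delete $KEY (version $VERSION_ID)"
    fi
done
# → would delete sample.txt (version v1)
# → would delete backup/sample.txt (version v2)
```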
@@ -275,7 +284,7 @@ aws s3api delete-bucket --bucket "$BUCKET_NAME"
 Don't forget to clean up local files:

 ```
-rm -f sample-file.txt sample-document.txt downloaded-sample-file.txt empty-file.tmp
+rm -f sample.txt downloaded-sample.txt
 ```

 ## Next steps
