Struts is a solid Java web framework, and adding Amazon S3 lets you handle file uploads, downloads, and storage with little friction. Whether you're testing locally or going live, this guide walks you through integrating S3 into a Struts application. We'll use LocalStack for local Docker development and real AWS S3 credentials for production on Kubernetes, keeping configuration clean, secure, and environment-specific.
1. Add the Right Dependencies to pom.xml
<dependencies>
  <!-- Struts framework -->
  <dependency>
    <groupId>org.apache.struts</groupId>
    <artifactId>struts2-core</artifactId>
    <version>${struts.version}</version>
  </dependency>

  <!-- AWS SDK for S3 -->
  <dependency>
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>s3</artifactId>
    <version>${aws.sdk.version}</version>
  </dependency>

  <!-- Kubernetes client for production config -->
  <dependency>
    <groupId>io.fabric8</groupId>
    <artifactId>kubernetes-client</artifactId>
    <version>${fabric8.version}</version>
  </dependency>
</dependencies>
You get Struts for the web layer, the AWS SDK v2 for S3 access, and the Fabric8 Kubernetes client for working with production configuration.
2. Run LocalStack via Docker for Development
LocalStack acts as a local stand-in for AWS, ideal for testing. Create a docker-compose.yml:
version: '3'
services:
  localstack:
    image: localstack/localstack:s3-latest
    environment:
      - SERVICES=s3
      - AWS_ACCESS_KEY_ID=test
      - AWS_SECRET_ACCESS_KEY=test
    ports:
      - "4566:4566"
    volumes:
      - "./init-s3.sh:/etc/localstack/init/ready.d/init-s3.sh"
Create init-s3.sh to initialize a test bucket:
#!/usr/bin/env bash
awslocal s3api create-bucket --bucket struts-dev-bucket
Start it:
docker-compose up -d
You now have a local S3-compatible endpoint at http://localhost:4566 with your bucket ready.
3. Development Configuration: s3-dev.properties
Create this file in src/main/resources:
s3.endpoint=http://localhost:4566
s3.region=us-east-1
s3.bucket=struts-dev-bucket
s3.accessKey=test
s3.secretKey=test
Use it in Java like this:
import java.net.URI;
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;

// props holds the key/value pairs loaded from s3-dev.properties
AwsBasicCredentials creds = AwsBasicCredentials.create(
        props.getProperty("s3.accessKey"),
        props.getProperty("s3.secretKey"));

S3Client client = S3Client.builder()
        .endpointOverride(URI.create(props.getProperty("s3.endpoint")))
        .region(Region.of(props.getProperty("s3.region")))
        .credentialsProvider(StaticCredentialsProvider.create(creds))
        .forcePathStyle(true)
        .build();
The .forcePathStyle(true) call is important: LocalStack expects path-style addressing, where the bucket name goes in the URL path rather than in a subdomain.
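If you want to confirm the wiring before writing any upload code, a quick head-bucket check against the client built above works. This snippet is an illustration, not something the guide requires:

import software.amazon.awssdk.services.s3.model.HeadBucketRequest;

// Illustrative check: throws NoSuchBucketException if init-s3.sh did not create
// the bucket, or a connection error if LocalStack is not running on port 4566.
client.headBucket(HeadBucketRequest.builder()
        .bucket(props.getProperty("s3.bucket"))
        .build());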
4. Prepare AWS S3 Configuration for Production
Store real S3 settings in Kubernetes:
kubectl create secret generic aws-secret \
  --from-literal=AWS_ACCESS_KEY_ID=yourKey \
  --from-literal=AWS_SECRET_ACCESS_KEY=yourSecret

kubectl create configmap aws-config \
  --from-literal=S3_BUCKET=struts-prod-bucket \
  --from-literal=S3_REGION=us-east-1
5. Production Configuration: s3-prod.properties
Include this in your resources:
s3.endpoint=https://s3.${S3_REGION}.amazonaws.com
s3.region=${S3_REGION}
s3.bucket=${S3_BUCKET}
s3.accessKey=${AWS_ACCESS_KEY_ID}
s3.secretKey=${AWS_SECRET_ACCESS_KEY}
No code changes are required; only the environment variables differ between environments.
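Note that java.util.Properties does not expand the ${...} placeholders above on its own; the loadProps(...) helper called later in this guide has to do that. Here is one possible sketch of it, assuming you want to substitute environment variables into the property values (the class name PropsLoader and the pattern are illustrative, not from the guide):

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public final class PropsLoader {

    private static final Pattern PLACEHOLDER = Pattern.compile("\\$\\{([A-Z0-9_]+)}");

    // Loads a properties file from the classpath and substitutes ${NAME}
    // placeholders with the value of the NAME environment variable, if set.
    public static Properties loadProps(String resource) {
        Properties props = new Properties();
        try (InputStream in = PropsLoader.class.getClassLoader().getResourceAsStream(resource)) {
            if (in == null) {
                throw new IllegalStateException("Missing resource: " + resource);
            }
            props.load(in);
        } catch (IOException e) {
            throw new IllegalStateException("Cannot read " + resource, e);
        }
        for (String key : props.stringPropertyNames()) {
            Matcher m = PLACEHOLDER.matcher(props.getProperty(key));
            StringBuffer resolved = new StringBuffer();
            while (m.find()) {
                String env = System.getenv(m.group(1));
                // Leave the placeholder untouched if the variable is not defined.
                m.appendReplacement(resolved, Matcher.quoteReplacement(env != null ? env : m.group(0)));
            }
            m.appendTail(resolved);
            props.setProperty(key, resolved.toString());
        }
        return props;
    }
}

In development the dev properties hold literal values, so the substitution step simply leaves them untouched.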
6. Kubernetes Deployment Snippet
Add to your Deployment spec:
env:
  - name: AWS_ACCESS_KEY_ID
    valueFrom:
      secretKeyRef:
        name: aws-secret
        key: AWS_ACCESS_KEY_ID
  - name: AWS_SECRET_ACCESS_KEY
    valueFrom:
      secretKeyRef:
        name: aws-secret
        key: AWS_SECRET_ACCESS_KEY
  - name: S3_BUCKET
    valueFrom:
      configMapKeyRef:
        name: aws-config
        key: S3_BUCKET
  - name: S3_REGION
    valueFrom:
      configMapKeyRef:
        name: aws-config
        key: S3_REGION
The Struts app then picks up the AWS credentials and bucket details from these environment variables.
7. Example Code for Upload in Struts Action
import java.util.Properties;

import com.opensymphony.xwork2.ActionSupport;

import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

public class S3Action extends ActionSupport {

    public String execute() {
        // Pick the prod profile when the Kubernetes-provided S3_BUCKET variable is present;
        // loadProps(...) and buildClient(...) wrap the property loading and S3Client setup shown above.
        Properties p = loadProps("s3-" + (System.getenv("S3_BUCKET") != null ? "prod" : "dev") + ".properties");
        S3Client s3 = buildClient(p);

        s3.putObject(PutObjectRequest.builder()
                        .bucket(p.getProperty("s3.bucket"))
                        .key("example.txt")
                        .build(),
                RequestBody.fromString("Hello from Struts!"));

        addActionMessage("Uploaded to S3 bucket " + p.getProperty("s3.bucket"));
        return SUCCESS;
    }
}
The same code works in development and production; only the configuration changes.
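Downloads follow the same pattern with getObject. A minimal sketch, assuming the same client and bucket (the helper class name is illustrative):

import java.nio.charset.StandardCharsets;

import software.amazon.awssdk.core.ResponseBytes;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.awssdk.services.s3.model.GetObjectResponse;

public final class S3Downloads {

    // Reads an object into memory and returns it as a UTF-8 string.
    public static String download(S3Client s3, String bucket, String key) {
        ResponseBytes<GetObjectResponse> bytes = s3.getObjectAsBytes(
                GetObjectRequest.builder().bucket(bucket).key(key).build());
        return bytes.asString(StandardCharsets.UTF_8);
    }
}

Because the client is built from the same properties, this too behaves identically against LocalStack and real S3.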