This guide shows how to manage large data efficiently in Spring Boot applications using the Apache Derby database.
1. Configure Derby for Large Data
In application.properties:
spring.datasource.url=jdbc:derby:memory:bigDB;create=true
spring.datasource.driver-class-name=org.apache.derby.jdbc.EmbeddedDriver
spring.jpa.hibernate.ddl-auto=update
spring.jpa.properties.hibernate.jdbc.batch_size=50
The hibernate.jdbc.batch_size=50 setting improves insert performance by grouping statements into JDBC batches instead of sending them one at a time.
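To see what batch_size buys, picture Hibernate grouping pending inserts into fixed-size chunks and flushing each chunk in one JDBC round trip. A minimal plain-Java sketch of that chunking idea (the chunks helper is hypothetical, not a Hibernate API):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchDemo {

    // Split a list into consecutive chunks of at most `size` elements,
    // mirroring how Hibernate groups up to batch_size inserts per round trip.
    static <T> List<List<T>> chunks(List<T> items, int size) {
        List<List<T>> out = new ArrayList<>();
        for (int i = 0; i < items.size(); i += size) {
            out.add(items.subList(i, Math.min(i + size, items.size())));
        }
        return out;
    }

    public static void main(String[] args) {
        List<Integer> rows = new ArrayList<>();
        for (int i = 0; i < 120; i++) rows.add(i);

        List<List<Integer>> batches = chunks(rows, 50);
        System.out.println(batches.size());        // 3 batches: 50 + 50 + 20
        System.out.println(batches.get(2).size()); // last batch has 20 rows
    }
}
```

With 120 rows and a batch size of 50, three round trips suffice instead of 120.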
2. Define Entity with Large Fields
import jakarta.persistence.*;

@Entity
public class Document {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String title;

    @Lob
    private String content; // large text storage

    // getters and setters
}
@Lob maps the field to a large object, which Derby stores as a CLOB.
3. Repository
import org.springframework.data.jpa.repository.JpaRepository;
public interface DocumentRepository extends JpaRepository<Document, Long> {
}
4. Controller for Upload and Fetch
import org.springframework.web.bind.annotation.*;
import java.util.List;

@RestController
@RequestMapping("/documents")
public class DocumentController {

    private final DocumentRepository repo;

    public DocumentController(DocumentRepository repo) {
        this.repo = repo;
    }

    @PostMapping
    public Document add(@RequestBody Document doc) {
        return repo.save(doc);
    }

    @GetMapping
    public List<Document> all() {
        return repo.findAll();
    }
}
5. Use Streaming for Exports
For very large datasets, return a java.util.stream.Stream so rows are fetched incrementally instead of being materialized in a list. Extend the repository:

import org.springframework.data.jpa.repository.Query;
import java.util.stream.Stream;

public interface DocumentRepository extends JpaRepository<Document, Long> {

    @Query("SELECT d FROM Document d")
    Stream<Document> streamAll();
}
This avoids loading everything into memory at once. The stream must be consumed inside a transaction and closed when you are done with it.
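A JPA stream keeps the database cursor open while you read it, which is why it must be closed afterwards; try-with-resources handles that. A minimal sketch of the consumption pattern, with a stand-in streamAll() so it runs without a database:

```java
import java.util.stream.Stream;

public class StreamExportDemo {

    // Stand-in for repo.streamAll(); the real stream would hold an open cursor.
    static Stream<String> streamAll() {
        return Stream.of("doc-1", "doc-2", "doc-3");
    }

    // Consume row by row; try-with-resources closes the stream (and cursor).
    static long exportAll() {
        try (Stream<String> docs = streamAll()) {
            return docs.count();
        }
    }

    public static void main(String[] args) {
        System.out.println(exportAll()); // 3
    }
}
```

In the real application the consuming method would also be annotated @Transactional(readOnly = true), since the stream reads lazily from the open result set.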
6. Test Large Inserts
Run the app and insert large text:
curl -X POST http://localhost:8080/documents \
-H "Content-Type: application/json" \
-d '{"title":"Big File","content":"'$(head -c 50000 /dev/urandom | base64)'"}'