Imagine the following situation: you created a superb app that suits your use cases perfectly, with loads of unit tests and even a pile of well-designed integration tests. Seemingly nothing to worry about, however... Your integration tests also run against your test database, and with every run of your test suite the database accumulates more and more data. It becomes more and more contaminated, and that contamination is also visible in your application. Not very convenient.
There are different ways to solve this inconvenience. The first option is to drop and recreate your database regularly. That is not very convenient either, since the test data you specifically crafted for the use case you are currently working on will also be gone every time...
Option number two is cleaning up the data you create, which is not a bad option for simple tests and simple applications, but can be very cumbersome if your application grows bigger and your logic becomes more and more complicated.
A third option could be spinning up a special database, only for your integration tests. As we don't like lots of configuration, and we do love out-of-the-box functionality, we look at an embedded database like H2, HSQLDB, or Derby. All possibly good solutions, BUT no real replacements for your production-grade database with all its bells and whistles. These replacements do not implement the complete SQL syntax (at least not in the same way your production database does), they do not scale the same way, and they behave differently, especially in the corner cases, which are typically exactly the cases we are interested in when testing. That means, again, back to the drawing board...
Well, then let's use a designated real database, hosted on a server, reserved for integration tests only. We can clean it after each run, and it allows us to reuse the same database every time. This approach, however, requires lots of setup and configuration work, and I can only imagine what will happen when two developers try running their tests simultaneously.
Wouldn't it be convenient if we could spin up a private database for each test run? A database with the same bells and whistles as the production database? In fact, exactly the same database as the production database? I already hear you thinking about the huge DevOps and configuration hell and the fights with the infra dudes... But here it comes: test containers to the rescue!
WHAT ARE TEST CONTAINERS?
Testcontainers offers a solution to start (and of course stop) a Docker container when running integration tests. That container can then be used as a dependency for your tests. The technique is not limited to databases: it can also be used for message brokers, object storage systems, user identification systems, Kubernetes, or even your own custom-made container!
This technique can be equally handy when your application has dependencies on external systems. If this is the case, test containers can ensure that these dependencies are available and behave in a deterministic way. Another advantage is that every test run has its own resources, every test run starts with a clean slate, even when running the same testing suite in parallel.
The general idea is that you specify the containers you need in your application. When the integration test starts, first an internal container (known as the ryuk-container) is started to manage the system. Then all needed containers are started and the test waits until all containers are started. After all the tests have concluded, and even when something has gone wrong, the internal container makes sure everything is cleaned up nicely.
The downside of using test containers is that you need a running Docker engine on your test machine. So before you start implementing test containers, you should check if Docker is available on the build machines of your CI/CD pipeline... Remember, we don't want to fight the infra dudes....
Using test containers
Enough theory, it's time to roll up your sleeves now. Imagine we have an application to record todos; our imaginary application has three imaginary REST endpoints: a POST to create a todo, a DELETE to remove a todo, and a GET to retrieve all todos...
Hold on, let's stop imagining... Take the Spring Initializr and create a project, adding Spring Data JPA, Spring Web, Lombok, and the MySQL Driver as dependencies.
Create a Todo Entity as follows:
@Data
@Entity
public class Todo {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String description;
}
Create a repository:
public interface TodoRepository extends JpaRepository<Todo, Long> {
}
Turn on table creation and connect to MySQL in application.yaml:
spring:
  jpa:
    hibernate:
      ddl-auto: update
  datasource:
    url: jdbc:mysql://localhost:3306/tododb
    username: user
    password: password
Before you forget: create a MySQL database with the following docker-compose file (save it as docker-compose.yml)
version: '3.3'
services:
  db:
    image: mysql:8.0
    restart: always
    environment:
      MYSQL_DATABASE: 'tododb'
      MYSQL_USER: 'user'
      MYSQL_PASSWORD: 'password'
      MYSQL_ROOT_PASSWORD: 'password'
    ports:
      - '3306:3306'
    expose:
      - '3306'
    volumes:
      - my-db:/var/lib/mysql
volumes:
  my-db:
and run with docker-compose up.
Finally, let us create a RestController:
@AllArgsConstructor
@RestController
@RequestMapping("/todos")
public class TodoController {

    private final TodoRepository todoRepository;

    @PostMapping
    @ResponseStatus(HttpStatus.CREATED)
    public Todo post(@RequestBody Todo todo) {
        todo.setId(null);
        return todoRepository.save(todo);
    }

    @GetMapping
    public List<Todo> get() {
        return todoRepository.findAll();
    }

    @DeleteMapping("/{id}")
    @ResponseStatus(HttpStatus.NO_CONTENT)
    public void delete(@PathVariable Long id) {
        todoRepository.deleteById(id);
    }
}
Now spin up your application and make sure it is working.
Let's write some integration tests:
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
class TodoControllerIT {

    @Autowired
    private TestRestTemplate restTemplate;

    @Autowired
    private TodoRepository todoRepository;

    @Test
    void post() {
        Todo todo = new Todo();
        todo.setDescription("Description");
        ResponseEntity<Todo> result = restTemplate.exchange("/todos", HttpMethod.POST,
                new HttpEntity<>(todo), new ParameterizedTypeReference<>() {
        });
        assertThat(result).isNotNull();
        assertThat(result.getStatusCode()).isEqualTo(HttpStatus.CREATED);
        assertThat(result.getBody()).isNotNull();
        // Additional assertions
    }

    @Test
    void get() {
        ResponseEntity<List<Todo>> result = restTemplate.exchange("/todos", HttpMethod.GET,
                null, new ParameterizedTypeReference<>() {
        });
        assertThat(result).isNotNull();
        assertThat(result.getStatusCode()).isEqualTo(HttpStatus.OK);
        assertThat(result.getBody()).isNotNull();
        // Additional assertions
    }

    @Test
    void delete() {
        Todo todo = new Todo();
        todo.setDescription("Description");
        todo = todoRepository.save(todo);
        ResponseEntity<Todo> result = restTemplate.exchange("/todos/" + todo.getId(),
                HttpMethod.DELETE, null, new ParameterizedTypeReference<>() {
        });
        assertThat(result).isNotNull();
        // The controller returns 204 NO_CONTENT for a delete, so assert that status
        assertThat(result.getStatusCode()).isEqualTo(HttpStatus.NO_CONTENT);
        // Additional assertions
    }
}
When you now run your tests, they run against the database in the Docker container defined above. You can easily verify this by looking at the todo table: the number of rows increases with each test run.
One separate test container for each test class
Next, we are going to couple a test container to the integration test class. The first thing to do is add the needed dependencies: the Testcontainers JUnit Jupiter dependency and a database-specific dependency. In our application, this becomes:
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>junit-jupiter</artifactId>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>mysql</artifactId>
    <scope>test</scope>
</dependency>
You can go without the MySQL dependency, but in that case, you'll have to configure the container manually using the GenericContainer class. Let's use the MySQL dependency for convenience...
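To give an impression of the manual route, here is a rough sketch of what that could look like with GenericContainer. This is an illustration only, not part of our application: it needs a running Docker engine, and you have to wire the environment variables, the exposed port, and the JDBC URL yourself.
```java
// Sketch: configuring a MySQL container by hand via GenericContainer
GenericContainer<?> mysql = new GenericContainer<>("mysql:8.0")
        .withEnv("MYSQL_DATABASE", "test")
        .withEnv("MYSQL_USER", "user")
        .withEnv("MYSQL_PASSWORD", "password")
        .withEnv("MYSQL_ROOT_PASSWORD", "password")
        .withExposedPorts(3306);
mysql.start();

// Build the JDBC URL from the host and the randomly mapped port yourself
String url = "jdbc:mysql://" + mysql.getHost() + ":" + mysql.getMappedPort(3306) + "/test";
```
The MySQLContainer class does all of this (plus waiting for the database to accept connections) for you, which is exactly why we prefer it.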
First, we need to specify the image we want to use. This is done with:
@Container
private static final MySQLContainer<?> mysql = new MySQLContainer<>("mysql:8.0");
The @Container annotation tells the system the image should be run as a test container.
Next, we need to tell the system that the container should start at the beginning of the test class and stop at the end. To accomplish this, simply add the @Testcontainers annotation at the class level.
Finally, since the connection parameters of the database are only known after the container has started, you need to inject them into the Spring configuration using:
@DynamicPropertySource
static void configureProperties(DynamicPropertyRegistry registry) {
    registry.add("spring.datasource.url", mysql::getJdbcUrl);
    registry.add("spring.datasource.username", mysql::getUsername);
    registry.add("spring.datasource.password", mysql::getPassword);
}
So the complete integration test becomes:
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@Testcontainers
class TodoControllerIT {

    @Autowired
    private TestRestTemplate restTemplate;

    @Autowired
    private TodoRepository todoRepository;

    @Container
    private static final MySQLContainer<?> mysql = new MySQLContainer<>("mysql:8.0");

    @DynamicPropertySource
    static void configureProperties(DynamicPropertyRegistry registry) {
        registry.add("spring.datasource.url", mysql::getJdbcUrl);
        registry.add("spring.datasource.username", mysql::getUsername);
        registry.add("spring.datasource.password", mysql::getPassword);
    }

    @Test
    void post() { ... }

    @Test
    void get() { ... }

    @Test
    void delete() { ... }
}
If you now run your integration tests, you will no longer see the number of rows in the todo table grow.
Ah right, did I mention that you need to start the Docker engine on your testing machine? So if your tests fail with a vague container-related error message, check if the Docker engine is running...
When we look at the logging, we see the container starting up:
10:21:59.414 [main] INFO tc.mysql:8.0 - Creating container for image: mysql:8.0
10:21:59.482 [main] INFO tc.mysql:8.0 - Container mysql:8.0 is starting: 3659e5fdfb479f5d2f64856d04ea409f6ada1398bda5db59c5daafb7a8d52ad9
10:21:59.736 [main] INFO tc.mysql:8.0 - Waiting for database connection to become available at jdbc:mysql://localhost:56181/test using query 'SELECT 1'
10:22:10.974 [main] INFO tc.mysql:8.0 - Container mysql:8.0 started in PT11.560164S
10:22:10.974 [main] INFO tc.mysql:8.0 - Container is started (JDBC URL: jdbc:mysql://localhost:56181/test)
but we see this for every test class the test container is used in: the container is created and started, and the tests wait until the container is available.
With this setup, when we run multiple test classes in our test suite, a separate container is created for each class, which has to start and stop over and over. This can result in a long runtime for the integration tests. Luckily, we can easily solve that.
The same container for all test classes
As mentioned before, a separate container is run for each test class, resulting in a quite considerable increase in testing time compared to running without test containers.
You can, with a relatively small change, configure the container to be reused for all tests.
First, create an abstract class with the configuration in it:
abstract class CommonControllerIT {

    private static final MySQLContainer<?> mysql = new MySQLContainer<>("mysql:8.0");

    @DynamicPropertySource
    static void configureProperties(DynamicPropertyRegistry registry) {
        registry.add("spring.datasource.url", mysql::getJdbcUrl);
        registry.add("spring.datasource.username", mysql::getUsername);
        registry.add("spring.datasource.password", mysql::getPassword);
    }

    @BeforeAll
    static void beforeAll() {
        mysql.start();
    }
}
Note that the @Testcontainers and @Container annotations are no longer used. Those annotations make sure the container is started and stopped around each test class, which is exactly the behavior we don't want here. Instead, we now start the container ourselves, which is done with the mysql.start() call.
The @BeforeAll makes sure the start command is run at the beginning of each test class, but calling start() on an already-running container has no effect, and since we never stop the container (with mysql.stop()), it is reused by every test class extending our abstract class. At the end, when all tests are finished, the container is stopped and cleaned up by the Testcontainers system itself.
The only thing left to do is to let your test classes (though only those that need this test container) inherit from the abstract class and remove any remaining test-container-related configuration from them.
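For our todo application, that inheritance step could look like this sketch (the test bodies stay exactly as they were):
```java
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
class TodoControllerIT extends CommonControllerIT {

    // The container, its startup, and the datasource properties are all
    // inherited from CommonControllerIT; no Testcontainers configuration
    // is needed here anymore.

    @Test
    void post() { ... }

    @Test
    void get() { ... }

    @Test
    void delete() { ... }
}
```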
When we now look at the logging, we see the container starting only once, even when multiple test classes use it.
This concludes this post about the wonderful technique of test containers. I hope you enjoyed reading it and have lots of fun implementing and using test containers in the future.