Advanced Testing Methods
- Waiting for Vaadin
- Waiting Until Conditions Are Met
- Scrolling
- Profiling Test Execution Time
- Running Tests in Parallel
- Parameterized Tests
There are a few advanced testing methods of which you should be aware: waiting for Vaadin; waiting until a particular condition is met; scrolling; profiling test execution time; running tests in parallel; and parameterized tests.
These testing methods aren’t always needed, though. For example, situations in which you might need to disable automatic waiting or scrolling in a view are rare. In such cases, you’ve probably encountered a bug in the software. Nevertheless, these testing methods are explained here for when they are needed.
Waiting for Vaadin
Traditional web pages are loaded and rendered by the browser as soon as they’re received. With such pages, you can test the page elements immediately after the page has loaded. In Vaadin and other Single-Page Applications (SPAs), rendering is done by JavaScript code, asynchronously. Therefore, you need to wait until the server has given its response to an AJAX request and the JavaScript code finishes rendering the UI.
A major advantage of using TestBench compared to other testing solutions is that it knows when something is still being rendered on the page. It waits for rendering to finish before proceeding with the test.
Usually, this isn’t something you need to consider, since waiting is enabled automatically. However, it might sometimes be necessary to disable it. You can do that by calling disableWaitForVaadin() in the TestBenchCommands interface, like this:
testBench(driver).disableWaitForVaadin();
When "waiting for rendering to finish" has been disabled, you can wait for it to finish by calling waitForVaadin()
, explicitly:
testBench(driver).waitForVaadin();
You can re-enable waiting in the same interface with enableWaitForVaadin().
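Putting these together, a typical pattern looks something like this (a minimal sketch using only the methods described above):

// Take manual control of waiting for this test
testBench(driver).disableWaitForVaadin();

// ... interact with the page without automatic waiting ...

// Explicitly wait until Vaadin has finished rendering
testBench(driver).waitForVaadin();

// Restore the default behavior for subsequent tests
testBench(driver).enableWaitForVaadin();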
Waiting Until Conditions Are Met
In addition to waiting for Vaadin, it’s also possible to wait until a condition is met. For example, you might want to wait until an element is visible on the web page. That might be done like so:
waitUntil(ExpectedConditions.presenceOfElementLocated(By.id("first")));
This call waits until the specified element is present, or times out after waiting for ten seconds, by default. The waitUntil(condition, timeout) variant allows the timeout duration to be controlled.
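For example, a minimal sketch using the two-argument variant (the 30-second timeout is an arbitrary illustration):

// Wait up to 30 seconds instead of the default ten
waitUntil(ExpectedConditions.presenceOfElementLocated(By.id("first")), 30);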
Scrolling
To be able to interact with an element, it needs to be visible on the screen. This limitation is set so that tests which are run using a WebDriver simulate a normal user as much as possible. TestBench handles this automatically by ensuring that an element is in view before an interaction is triggered.
Sometimes, you might want to disable this behavior. You can do so with TestBenchCommands.setAutoScrollIntoView(false).
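For example, assuming the same testBench(driver) entry point used earlier:

// Disable automatic scrolling; the test must now ensure
// that elements are visible before interacting with them
testBench(driver).setAutoScrollIntoView(false);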
Profiling Test Execution Time
You might be interested not only in whether an application works, but also in how long it takes. Profiling test execution times consistently isn’t trivial: a test environment can have different kinds of latency and interference.
For example, in a distributed setup, timing results taken on the test server would include the latencies among the test server, the grid hub, a grid node running the browser, and the web server running the application. In such a setup, you could also expect interference among multiple test nodes, which all might make requests to a shared application server and possibly also shared virtual machine resources.
Furthermore, in Vaadin applications there are two sides which need to be profiled: the server side, on which the application logic is executed; and the client side, where it’s rendered in the browser. Vaadin TestBench includes methods for measuring execution time both on the server side and the client side.
The TestBenchCommands interface offers the following methods for profiling test execution time:
- totalTimeSpentServicingRequests(): Returns the total time in milliseconds spent servicing requests in the application on the server side. The timer starts when you first navigate to the application, and hence start a new session; the time passes only when servicing requests for that particular session. If you’re also interested in the client-side performance for the last request, you must call timeSpentRenderingLastRequest() before calling this method, because this method makes an extra server request, which causes an empty response to be rendered.
- timeSpentServicingLastRequest(): Returns the time in milliseconds spent servicing the last request in the application on the server side. Not all user interaction through the WebDriver causes server requests. As with the total, if you’re also interested in the client-side performance for the last request, you must call timeSpentRenderingLastRequest() before calling this method.
- totalTimeSpentRendering(): Returns the total time in milliseconds spent rendering the user interface of the application on the client side, that is, in the browser. This time passes only when the browser is still rendering after interacting with it through the WebDriver.
- timeSpentRenderingLastRequest(): Returns the time in milliseconds spent rendering the user interface of the application after the last server request. Not all user interaction through the WebDriver causes server requests. If you also call timeSpentServicingLastRequest() or totalTimeSpentServicingRequests(), you should do so before calling this method; those methods cause a server request, which zeros the rendering time measured by this method.
The following example is given in the VerifyExecutionTimeITCase.java file in the TestBench demo:
@Test
public void verifyServerExecutionTime() throws Exception {
    // Get the start time on the server side
    long currentSessionTime = testBench(getDriver())
            .totalTimeSpentServicingRequests();

    // Interact with the application
    calculateOnePlusTwo();

    // Calculate the passed processing time on the server side
    long timeSpentByServerForSimpleCalculation =
            testBench().totalTimeSpentServicingRequests() -
            currentSessionTime;

    // Report the timing
    System.out.println("Calculating 1+2 took about "
            + timeSpentByServerForSimpleCalculation
            + "ms in the servlet's service method.");

    // Fail if the processing time was critically long
    if (timeSpentByServerForSimpleCalculation > 30) {
        fail("Simple calculation shouldn't take " +
                timeSpentByServerForSimpleCalculation + "ms!");
    }

    // Do the same with rendering time
    long totalTimeSpentRendering =
            testBench().totalTimeSpentRendering();
    System.out.println("Rendering UI took "
            + totalTimeSpentRendering + "ms");
    if (totalTimeSpentRendering > 400) {
        fail("Rendering UI shouldn't take "
                + totalTimeSpentRendering + "ms!");
    }

    // A normal assertion on the UI state
    assertEquals("3.0",
            $(TextFieldElement.class).first().getValue());
}
Running Tests in Parallel
TestBench supports parallel test execution using its own test runner (JUnit 4) or native JUnit 5 parallel execution.
Up to fifty test methods are executed simultaneously by default. The limit can be set using the com.vaadin.testbench.Parameters.testsInParallel system property.
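For example, assuming a standard Maven build that forwards system properties to the test JVM, the limit could be lowered on the command line like this:

mvn verify -Dcom.vaadin.testbench.Parameters.testsInParallel=5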
When running tests in parallel, you need to ensure that the tests are independent and don’t affect each other in any way.
Extending ParallelTest (JUnit 4)
Usually, you’ll want to configure something for all of your tests. It makes sense, therefore, to create a common superclass: for example, public abstract class AbstractIT extends ParallelTest.
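A minimal sketch of such a superclass might look like this (the base URL is a hypothetical placeholder; ParallelTest’s own setup() creates the driver):

import com.vaadin.testbench.parallel.ParallelTest;
import org.junit.Before;

public abstract class AbstractIT extends ParallelTest {

    @Before
    @Override
    public void setup() throws Exception {
        // Let ParallelTest create and configure the WebDriver
        super.setup();
        // Hypothetical base URL; adjust to your deployment
        getDriver().get("http://localhost:8080/");
    }
}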
If your tests don’t work in parallel, set the com.vaadin.testbench.Parameters.testsInParallel system property to 1.
Using Native JUnit 5 Parallel Execution
To run tests in parallel, extend the TestBench utility class BrowserTestBase or manually annotate test classes with @Execution(ExecutionMode.CONCURRENT).
To disable parallel execution, annotate the test class with @Execution(ExecutionMode.SAME_THREAD).
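For example, a minimal sketch of a test class opting in to concurrent execution (the class body is illustrative):

import org.junit.jupiter.api.parallel.Execution;
import org.junit.jupiter.api.parallel.ExecutionMode;

@Execution(ExecutionMode.CONCURRENT)
class MyConcurrentIT {
    // Test methods in this class may run in parallel
    // with each other and with other concurrent classes.
}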
Accessing WebDriver & More Test Information
Using JUnit 5, it’s possible to access additional test information in a method annotated with @Test, @BeforeEach, @AfterEach, @BeforeAll, or @AfterAll by adding the BrowserTestInfo method parameter. Here’s an example of this:
@BeforeEach
public void setWebDriverAndCapabilities(BrowserTestInfo browserTestInfo) {
    // customize driver if needed
    setDriver(browserTestInfo.driver());
    // access browser capabilities
    this.capabilities = browserTestInfo.capabilities();
}
BrowserTestInfo contains information about the following:
- WebDriver and browser capabilities used for the current test execution;
- hostname of the hub for remote execution; and
- browser name and version used for local execution.
Parameterized Tests
Parameterized tests are a JUnit feature that makes it possible to run a test multiple times with different arguments. It’s available in both JUnit 4 and JUnit 5, and TestBench supports both, but the test setup is slightly different.
In JUnit 4, the test class must use the Parameterized runner, and you provide parameters that can be injected into the class constructor or public fields.
@RunWith(Parameterized.class)
public class MyTestClass extends TestBenchTestCase {

    @Parameterized.Parameters
    public static Iterable<String> data() {
        return List.of("first", "second");
    }

    private final String parameter;

    public MyTestClass(String parameter) {
        this.parameter = parameter;
    }

    @Before
    public void setup() {
        setDriver(new ChromeDriver());
    }

    @Test
    public void myTestMethod() {
        getDriver().get("http://localhost:8080/" + parameter);
    }

    @After
    public void tearDown() {
        getDriver().quit();
    }
}
With JUnit 5, the tests are declared as regular test methods, but using the @ParameterizedTest annotation instead of @Test. Parameters are injected as method arguments. Unfortunately, using @ParameterizedTest in combination with other test templates, such as @BrowserTest, currently may not produce the desired effects, because every generated test is aware only of the features provided by its own generator.
To clarify, look at the following example code, which doesn’t work:
class MyTestClass extends BrowserTestBase {

    @BrowserTest
    @ParameterizedTest
    @ValueSource(strings = { "first", "second" })
    void myTestMethod(String parameter) {
        getDriver().get("http://localhost:8080/" + parameter);
    }
}
The expectation might be that the test should run twice, opening the browser at the requested URL: first http://localhost:8080/first and then http://localhost:8080/second. However, the execution produces three failures: two because the @BrowserTest initialization isn’t performed (No ParameterResolver registered for parameter [com.vaadin.testbench.browser.BrowserTestInfo arg0]), and one because the parameter value can’t be injected (No ParameterResolver registered for parameter [java.lang.String param]).
For further information, you can look at the JUnit issues reporting the problem, and the related feature request ticket.
To circumvent this limitation, TestBench introduces the @ParameterizedBrowserTest annotation. It’s a specialization of @BrowserTest that supports parameter injection, in exactly the same way as @ParameterizedTest.
Below is an example of how to implement a parameterized browser test:
@RunLocally(Browser.CHROME) // (1)
class MyTestClass extends BrowserTestBase {

    @ParameterizedBrowserTest // (2)
    @ValueSource(strings = { "first", "second" }) // (3)
    void myTestMethod(String parameter) {
        getDriver().get("http://localhost:8080/" + parameter);
    }
}
1. Define which browser should be used for the parameterized tests.
2. Mark the method as a parameterized browser test.
3. Provide sources for the method parameters.
Parameterized Tests on Multiple Browsers
To run parameterized tests on multiple local browsers, you need to implement the tests in a base abstract class, and then create a subclass for each browser, annotating it with @RunLocally. With JUnit 4, the base test class inherits from ParallelTest to make TestBench take care of creating and destroying driver instances.
abstract class AbstractParameterizedTest extends BrowserTestBase {

    @ParameterizedBrowserTest
    @ValueSource(strings = { "first", "second" })
    void myTestMethod(String parameter) {
        getDriver().get("http://localhost:8080/" + parameter);
    }
}
@RunLocally(Browser.CHROME)
class ChromeParameterizedIT extends AbstractParameterizedTest {
}
@RunLocally(Browser.FIREFOX)
class FirefoxParameterizedIT extends AbstractParameterizedTest {
}
Parameterized tests can also run on multiple remote browsers, using a similar setup. The main difference is that the base class should be annotated with @RunOnHub, and the subclasses should have a method annotated with @BrowserConfiguration that returns a List containing a single DesiredCapabilities item. Note that the subclasses must have public visibility to work with the @BrowserConfiguration annotation.
@RunOnHub("hub.testgrid.mydomain.com")
abstract class AbstractParameterizedTest extends BrowserTestBase {

    @ParameterizedBrowserTest
    @ValueSource(strings = { "first", "second" })
    void myTestMethod(String parameter) {
        getDriver().get("http://localhost:8080/" + parameter);
    }
}

public class ChromeParameterizedIT extends AbstractParameterizedTest {

    @BrowserConfiguration
    public List<DesiredCapabilities> browserConfig() {
        List<DesiredCapabilities> capabilities = new ArrayList<>();
        capabilities.add(Browser.CHROME.getDesiredCapabilities());
        return capabilities;
    }
}

public class FirefoxParameterizedIT extends AbstractParameterizedTest {

    @BrowserConfiguration
    public List<DesiredCapabilities> browserConfig() {
        List<DesiredCapabilities> capabilities = new ArrayList<>();
        capabilities.add(Browser.FIREFOX.getDesiredCapabilities());
        return capabilities;
    }
}