While the test body above is concise, the reader needs to do some mental computation to understand it, e.g., by following the flow of self.users from setUp() through _RegisterAllUsers(). Since tests don't have tests, it should be easy for humans to manually inspect them for correctness, even at the expense of greater code duplication. This means that the DRY principle often isn’t a good fit for unit tests, even though it is a best practice for production code.

In tests we can use the DAMP principle (“Descriptive and Meaningful Phrases”), which emphasizes readability over uniqueness. Applying this principle can introduce code redundancy (e.g., by repeating similar code), but it makes tests more obviously correct. Let’s add some DAMP-ness to the above test:

def setUp(self):
  self.forum = Forum()

def testCanRegisterMultipleUsers(self):
  # Create the users in the test instead of relying on users created in setUp.
  user1 = User('alice')
  user2 = User('bob')

  # Register the users in the test instead of in a helper method, and don't use a for-loop.
  self.forum.Register(user1)
  self.forum.Register(user2)

  # Assert each user individually instead of using a for-loop.
  self.assertTrue(self.forum.HasRegisteredUser(user1))
  self.assertTrue(self.forum.HasRegisteredUser(user2))

Note that the DRY principle is still relevant in tests; for example, using a helper function for creating value objects can increase clarity by removing redundant details from the test body. Ideally, test code should be both readable and unique, but sometimes there’s a trade-off. When writing unit tests and faced with a choice between the DRY and DAMP principles, lean more heavily toward DAMP.
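To make the value-object point concrete, here is a small sketch of such a helper (the `User` fields and the `NewUser` helper are invented for illustration and don't correspond to any API above):

```python
class User:
    """Hypothetical value object with more fields than most tests care about."""
    def __init__(self, name, email, signup_year):
        self.name = name
        self.email = email
        self.signup_year = signup_year

def NewUser(name, email='[email protected]', signup_year=2019):
    """Test helper: defaults the fields that are irrelevant to most tests."""
    return User(name, email, signup_year)

# The test body now mentions only the value it actually asserts on:
user = NewUser('alice')
assert user.name == 'alice'
```

The helper keeps the test DRY without hurting readability, because the details it hides are exactly the ones the test doesn't assert on.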

  • DON’T: Criticize the person. Instead, discuss the code. Even the perception that a comment is about a person (e.g., due to using “you” or “your”) distracts from the goal of improving the code.
    Reviewer Don’t:
    Why are you using this approach? You’re adding unnecessary complexity.
    
    Reviewer Do:
    This concurrency model appears to be adding complexity to the system without any visible performance benefit.
  • DON’T: Use harsh language. Code review comments with a negative tone are less likely to be useful. For example, prior research found very negative comments were considered useful by authors 57% of the time, while more-neutral comments were useful 79% of the time.  

  • As a Reviewer:
    • DO: Provide specific and actionable feedback. If you don’t have specific advice, sometimes it’s helpful to ask for clarification on why the author made a decision.
      Reviewer Don’t:
      I don’t understand this.
      
      Reviewer Do:
      If this is an optimization, can you please add comments?
    • DO: Clearly mark nitpicks and optional comments by using prefixes such as ‘Nit’ or ‘Optional’. This allows the author to better gauge the reviewer’s expectations.

    As an Author:
    • DO: Clarify code or reply to the reviewer’s comment in response to feedback. Failing to do so can signal a lack of receptiveness to implementing improvements to the code.
      Author Don’t:
      That makes sense in some cases but not here.
      
      Author Do:
      I added a comment about why it’s implemented that way.
    • DO: When disagreeing with feedback, explain the advantage of your approach. In cases where you can’t reach consensus, follow Google’s guidance for resolving conflicts in code review.

    assertEquals("Message has been sent", getString(notification, EXTRA_BIG_TEXT));
    assertTrue(
        getString(notification, EXTRA_TEXT)
            .contains("Kurt Kluever <[email protected]>"));


    The two assertions above test almost the same thing, but they are structured differently. The difference in structure makes it hard to identify the difference in what's being tested.

    A better way to structure these assertions is to use a fluent API:

    assertThat(getString(notification, EXTRA_BIG_TEXT))
        .isEqualTo("Message has been sent");
    assertThat(getString(notification, EXTRA_TEXT))
        .contains("Kurt Kluever <[email protected]>");


    A fluent API naturally leads to other advantages:
    • IDE autocompletion can suggest assertions that fit the value under test, including rich operations like containsExactly(permission.SEND_SMS, permission.READ_SMS).
    • Failure messages can include the value under test and the expected result. Contrast this with the assertTrue call above, which lacks a failure message entirely.
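To make the second advantage concrete, here is a minimal Python sketch of the idea behind a fluent assertion API (the names `assert_that` and `FluentSubject` are invented for illustration; Truth itself is a Java library). Because the chain starts from the actual value, every assertion method can report that value when it fails:

```python
class FluentSubject:
    """Minimal sketch of a fluent assertion subject.

    The subject holds the value under test, so failure messages
    can always include both the expected and the actual value.
    """
    def __init__(self, actual):
        self._actual = actual

    def is_equal_to(self, expected):
        if self._actual != expected:
            raise AssertionError(
                f"expected: {expected!r}\nbut was: {self._actual!r}")
        return self

    def contains(self, substring):
        if substring not in self._actual:
            raise AssertionError(
                f"expected to contain: {substring!r}\nbut was: {self._actual!r}")
        return self

def assert_that(actual):
    return FluentSubject(actual)

# Both assertions share one structure, and both fail with messages
# that show the actual value:
assert_that("Message has been sent").is_equal_to("Message has been sent")
assert_that("Kurt Kluever <[email protected]>").contains("Kurt Kluever")
```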
    Google's fluent assertion library for Java and Android is Truth. We're happy to announce that we've released Truth 1.0, which stabilizes our API after years of fine-tuning.



    Truth started in 2011 as a Googler's personal open source project. Later, it was donated back to Google and cultivated by the Java Core Libraries team, the people who bring you Guava.

    You might already be familiar with assertion libraries like Hamcrest and AssertJ, which provide similar features. We've designed Truth to have a simpler API and more readable failure messages. For example, here's a failure message from AssertJ:

    java.lang.AssertionError:
    Expecting:
      <[year: 2019
    month: 7
    day: 15
    ]>
    to contain exactly in any order:
      <[year: 2019
    month: 6
    day: 30
    ]>
    elements not found:
      <[year: 2019
    month: 6
    day: 30
    ]>
    and elements not expected:
      <[year: 2019
    month: 7
    day: 15
    ]>


    And here's the equivalent message from Truth:

    value of:
        iterable.onlyElement()
    expected:
        year: 2019
        month: 6
        day: 30
    
    but was:
        year: 2019
        month: 7
        day: 15
    


    For more details, read our comparison of the libraries, and try Truth for yourself.

    Also, if you're developing for Android, try AndroidX Test. It includes Truth extensions that make assertions even easier to write and failure messages even clearer:


    assertThat(notification).extras().string(EXTRA_BIG_TEXT)
        .isEqualTo("Message has been sent");
    assertThat(notification).extras().string(EXTRA_TEXT)
        .contains("Kurt Kluever <[email protected]>");


    Coming soon: Kotlin users of Truth can look forward to Kotlin-specific enhancements.

    android_test {
        name: "HelloWorldTests",
        srcs: ["src/**/*.java"],
        sdk_version: "current",
        static_libs: ["android-support-test"],
        certificate: "platform",
        test_suites: ["device-tests"],
    }

    Note that the android_test declaration at the beginning indicates this is a test, while an android_app declaration would instead indicate an app package. Complex test configuration options still exist for test modules that require customized setup and teardown that cannot be performed within the test case itself.

    Mapping tests in the source tree

    Test Mapping allows developers to create pre- and post-submit test rules directly in the Android source tree and leave the decisions of branches and devices to be tested to the test infrastructure itself. Test Mapping definitions are JSON files named TEST_MAPPING that can be placed in any source directory.

    Test Mapping categorizes tests via test groups. The name of a test group can be any string. For example, presubmit can be a group of tests to run when validating changes, and postsubmit tests can be used to validate builds after changes are merged.

    For the directory requiring test coverage, simply add a TEST_MAPPING JSON file resembling the example below. These rules will ensure the tests run in presubmit checks when any files are touched in that directory or any of its subdirectories.

    Here is a sample TEST_MAPPING file:
    {
      "presubmit": [
        {
          "name": "CtsAccessibilityServiceTestCases",
          "options": [
            {
              "include-annotation": "android.platform.test.annotations.Presubmit"
            }
          ]
        }
      ],
      "postsubmit": [
        {
          "name": "CtsWindowManagerDeviceTestCases"
        }
      ],
      "imports": [
        {
          "path": "frameworks/base/services/core/java/com/android/server/am"
        }
      ]
    }

    Running tests locally with Atest

    Atest is a command line tool that allows developers to build, install, and run Android tests locally, greatly speeding test re-runs without requiring knowledge of Trade Federation Test Harness command line options.

    Atest commands take the following form:
    atest [optional-arguments] test-to-run

    You can run one or more tests by separating test references with spaces, like so:
    atest test-to-run-1 test-to-run-2

    To run an entire test module, use its module name. Input the name as it appears in the LOCAL_MODULE or LOCAL_PACKAGE_NAME variables in that test's Android.mk or Android.bp file.

    For example:
    atest FrameworksServicesTests
    atest CtsJankDeviceTestCases

    Discovering tests with Atest and Test Mapping

    Atest and Test Mapping work together to solve the problem of test discovery, i.e., determining which tests need to be run when a directory of code is edited. For example, to execute all presubmit test rules for a given directory locally:

    1. Go to the directory containing the TEST_MAPPING file.
    2. Run the command: atest
    All presubmit tests configured in the TEST_MAPPING files of the current directory and its parent directories are run. Atest will locate and run two tests for presubmit.

    Finding more testing documentation

    Introductory testing documents were also published on source.android.com to support Soong and platform testing in general.
    In addition to exposing more testing documentation, Android has recently opened up build infrastructure to monitor submissions through ci.android.com. See the More visibility into the Android Open Source Project blog post and the Continuous Integration Dashboard for instructions on viewing build status and downloading build artifacts.

    Android EngProd endeavors to bring you more previously internal-only features to make your life easier. Watch this Google Testing Blog, the Android Developers Blog, and source.android.com for future enhancements.