diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 1f0d69ce..2924e0fa 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -1,158 +1,158 @@ # How to become a contributor and submit your own code ## Contributor License Agreements We'd love to accept your patches! Before we can take them, we have to jump a couple of legal hurdles. Please fill out either the individual or corporate Contributor License Agreement (CLA). * If you are an individual writing original source code and you're sure you own the intellectual property, then you'll need to sign an [individual CLA](https://developers.google.com/open-source/cla/individual). * If you work for a company that wants to allow you to contribute your work, then you'll need to sign a [corporate CLA](https://developers.google.com/open-source/cla/corporate). Follow either of the two links above to access the appropriate CLA and instructions for how to sign and return it. Once we receive it, we'll be able to accept your pull requests. ## Are you a Googler? If you are a Googler, please make an attempt to submit an internal change rather than a GitHub Pull Request. If you are not able to submit an internal change, a PR is acceptable as an alternative. ## Contributing A Patch 1. Submit an issue describing your proposed change to the [issue tracker](https://github.com/google/googletest). -1. Please don't mix more than one logical change per submittal, because it +2. Please don't mix more than one logical change per submittal, because it makes the history hard to follow. If you want to make a change that doesn't have a corresponding issue in the issue tracker, please create one. -1. Also, coordinate with team members that are listed on the issue in question. +3. Also, coordinate with team members that are listed on the issue in question. This ensures that work isn't being duplicated and communicating your plan early also generally leads to better patches. -1. If your proposed change is accepted, and you haven't already done so, sign a +4. If your proposed change is accepted, and you haven't already done so, sign a Contributor License Agreement (see details above). -1. Fork the desired repo, develop and test your code changes. -1. Ensure that your code adheres to the existing style in the sample to which +5. Fork the desired repo, develop and test your code changes. +6. Ensure that your code adheres to the existing style in the sample to which you are contributing. -1. Ensure that your code has an appropriate set of unit tests which all pass. -1. Submit a pull request. +7. Ensure that your code has an appropriate set of unit tests which all pass. +8. Submit a pull request. ## The Google Test and Google Mock Communities The Google Test community exists primarily through the [discussion group](http://groups.google.com/group/googletestframework) and the GitHub repository. Likewise, the Google Mock community exists primarily through their own [discussion group](http://groups.google.com/group/googlemock). You are definitely encouraged to contribute to the discussion and you can also help us to keep the effectiveness of the group high by following and promoting the guidelines listed here. ### Please Be Friendly Showing courtesy and respect to others is a vital part of the Google culture, and we strongly encourage everyone participating in Google Test development to join us in accepting nothing less.
Of course, being courteous is not the same as failing to constructively disagree with each other, but it does mean that we should be respectful of each other when enumerating the 42 technical reasons that a particular proposal may not be the best choice. There's never a reason to be antagonistic or dismissive toward anyone who is sincerely trying to contribute to a discussion. Sure, C++ testing is serious business and all that, but it's also a lot of fun. Let's keep it that way. Let's strive to be one of the friendliest communities in all of open source. As always, discuss Google Test in the official GoogleTest discussion group. You don't have to actually submit code in order to sign up. Your participation itself is a valuable contribution. ## Style To keep the source consistent, readable, diffable and easy to merge, we use a fairly rigid coding style, as defined by the [google-styleguide](https://github.com/google/styleguide) project. All patches will be expected to conform to the style outlined [here](https://google.github.io/styleguide/cppguide.html). Use [.clang-format](https://github.com/google/googletest/blob/master/.clang-format) to check your formatting ## Requirements for Contributors If you plan to contribute a patch, you need to build Google Test, Google Mock, and their own tests from a git checkout, which has further requirements: * [Python](https://www.python.org/) v2.3 or newer (for running some of the tests and re-generating certain source files from templates) * [CMake](https://cmake.org/) v2.6.4 or newer * [GNU Build System](https://en.wikipedia.org/wiki/GNU_Build_System) including automake (>= 1.9), autoconf (>= 2.59), and libtool / libtoolize. ## Developing Google Test This section discusses how to make your own changes to Google Test. ### Testing Google Test Itself To make sure your changes work as intended and don't break existing functionality, you'll want to compile and run Google Test's own tests. For that you can use CMake: mkdir mybuild cd mybuild cmake -Dgtest_build_tests=ON ${GTEST_DIR} Make sure you have Python installed, as some of Google Test's tests are written in Python. If the cmake command complains about not being able to find Python (`Could NOT find PythonInterp (missing: PYTHON_EXECUTABLE)`), try telling it explicitly where your Python executable can be found: cmake -DPYTHON_EXECUTABLE=path/to/python -Dgtest_build_tests=ON ${GTEST_DIR} Next, you can build Google Test and all of its own tests. On \*nix, this is usually done by 'make'. To run the tests, do make test All tests should pass. ### Regenerating Source Files Some of Google Test's source files are generated from templates (not in the C++ sense) using a script. For example, the file include/gtest/internal/gtest-type-util.h.pump is used to generate gtest-type-util.h in the same directory. You don't need to worry about regenerating the source files unless you need to modify them. You would then modify the corresponding `.pump` files and run the '[pump.py](googletest/scripts/pump.py)' generator script. See the [Pump Manual](googletest/g3doc/PumpManual.md). ## Developing Google Mock This section discusses how to make your own changes to Google Mock. #### Testing Google Mock Itself To make sure your changes work as intended and don't break existing functionality, you'll want to compile and run Google Test's own tests. For that you'll need Autotools. First, make sure you have followed the instructions above to configure Google Mock. Then, create a build output directory and enter it. 
Next, ${GMOCK_DIR}/configure # try --help for more info Once you have successfully configured Google Mock, the build steps are standard for GNU-style OSS packages. make # Standard makefile following GNU conventions make check # Builds and runs all tests - all should pass. Note that when building your project against Google Mock, you are building against Google Test as well. There is no need to configure Google Test separately. diff --git a/googlemock/docs/cheat_sheet.md b/googlemock/docs/cheat_sheet.md index e839fa9d..633fda06 100644 --- a/googlemock/docs/cheat_sheet.md +++ b/googlemock/docs/cheat_sheet.md @@ -1,897 +1,897 @@ ## gMock Cheat Sheet ### Defining a Mock Class #### Mocking a Normal Class {#MockClass} Given ```cpp class Foo { ... virtual ~Foo(); virtual int GetSize() const = 0; virtual string Describe(const char* name) = 0; virtual string Describe(int type) = 0; virtual bool Process(Bar elem, int count) = 0; }; ``` (note that `~Foo()` **must** be virtual) we can define its mock as ```cpp #include "gmock/gmock.h" class MockFoo : public Foo { ... MOCK_METHOD(int, GetSize, (), (const, override)); MOCK_METHOD(string, Describe, (const char* name), (override)); MOCK_METHOD(string, Describe, (int type), (override)); MOCK_METHOD(bool, Process, (Bar elem, int count), (override)); }; ``` To create a "nice" mock, which ignores all uninteresting calls, a "naggy" mock, which warns on all uninteresting calls, or a "strict" mock, which treats them as failures: ```cpp using ::testing::NiceMock; using ::testing::NaggyMock; using ::testing::StrictMock; NiceMock nice_foo; // The type is a subclass of MockFoo. NaggyMock naggy_foo; // The type is a subclass of MockFoo. StrictMock strict_foo; // The type is a subclass of MockFoo. ``` **Note:** A mock object is currently naggy by default. We may make it nice by default in the future. #### Mocking a Class Template {#MockTemplate} Class templates can be mocked just like any class. To mock ```cpp template class StackInterface { ... virtual ~StackInterface(); virtual int GetSize() const = 0; virtual void Push(const Elem& x) = 0; }; ``` (note that all member functions that are mocked, including `~StackInterface()` **must** be virtual). ```cpp template class MockStack : public StackInterface { ... MOCK_METHOD(int, GetSize, (), (const, override)); MOCK_METHOD(void, Push, (const Elem& x), (override)); }; ``` #### Specifying Calling Conventions for Mock Functions If your mock function doesn't use the default calling convention, you can specify it by adding `Calltype(convention)` to `MOCK_METHOD`'s 4th parameter. For example, ```cpp MOCK_METHOD(bool, Foo, (int n), (Calltype(STDMETHODCALLTYPE))); MOCK_METHOD(int, Bar, (double x, double y), (const, Calltype(STDMETHODCALLTYPE))); ``` where `STDMETHODCALLTYPE` is defined by `` on Windows. ### Using Mocks in Tests {#UsingMocks} The typical work flow is: 1. Import the gMock names you need to use. All gMock symbols are in the `testing` namespace unless they are macros or otherwise noted. 2. Create the mock objects. 3. Optionally, set the default actions of the mock objects. 4. Set your expectations on the mock objects (How will they be called? What will they do?). 5. Exercise code that uses the mock objects; if necessary, check the result using googletest assertions. 6. When a mock object is destructed, gMock automatically verifies that all expectations on it have been satisfied. 
Here's an example: ```cpp using ::testing::Return; // #1 TEST(BarTest, DoesThis) { MockFoo foo; // #2 ON_CALL(foo, GetSize()) // #3 .WillByDefault(Return(1)); // ... other default actions ... EXPECT_CALL(foo, Describe(5)) // #4 .Times(3) .WillRepeatedly(Return("Category 5")); // ... other expectations ... EXPECT_EQ("good", MyProductionFunction(&foo)); // #5 } // #6 ``` ### Setting Default Actions {#OnCall} gMock has a **built-in default action** for any function that returns `void`, `bool`, a numeric value, or a pointer. In C++11, it will additionally returns the default-constructed value, if one exists for the given type. To customize the default action for functions with return type *`T`*: ```cpp using ::testing::DefaultValue; // Sets the default value to be returned. T must be CopyConstructible. DefaultValue::Set(value); // Sets a factory. Will be invoked on demand. T must be MoveConstructible. // T MakeT(); DefaultValue::SetFactory(&MakeT); // ... use the mocks ... // Resets the default value. DefaultValue::Clear(); ``` Example usage: ```cpp // Sets the default action for return type std::unique_ptr to // creating a new Buzz every time. DefaultValue>::SetFactory( [] { return MakeUnique(AccessLevel::kInternal); }); // When this fires, the default action of MakeBuzz() will run, which // will return a new Buzz object. EXPECT_CALL(mock_buzzer_, MakeBuzz("hello")).Times(AnyNumber()); auto buzz1 = mock_buzzer_.MakeBuzz("hello"); auto buzz2 = mock_buzzer_.MakeBuzz("hello"); EXPECT_NE(nullptr, buzz1); EXPECT_NE(nullptr, buzz2); EXPECT_NE(buzz1, buzz2); // Resets the default action for return type std::unique_ptr, // to avoid interfere with other tests. DefaultValue>::Clear(); ``` To customize the default action for a particular method of a specific mock object, use `ON_CALL()`. `ON_CALL()` has a similar syntax to `EXPECT_CALL()`, but it is used for setting default behaviors (when you do not require that the mock method is called). See go/prefer-on-call for a more detailed discussion. ```cpp ON_CALL(mock-object, method(matchers)) .With(multi-argument-matcher) ? .WillByDefault(action); ``` ### Setting Expectations {#ExpectCall} `EXPECT_CALL()` sets **expectations** on a mock method (How will it be called? What will it do?): ```cpp EXPECT_CALL(mock-object, method (matchers)?) .With(multi-argument-matcher) ? .Times(cardinality) ? .InSequence(sequences) * .After(expectations) * .WillOnce(action) * .WillRepeatedly(action) ? .RetiresOnSaturation(); ? ``` If `(matchers)` is omitted, the expectation is the same as if the matchers were set to anything matchers (for example, `(_, _, _, _)` for a four-arg method). If `Times()` is omitted, the cardinality is assumed to be: * `Times(1)` when there is neither `WillOnce()` nor `WillRepeatedly()`; * `Times(n)` when there are `n` `WillOnce()`s but no `WillRepeatedly()`, where `n` >= 1; or * `Times(AtLeast(n))` when there are `n` `WillOnce()`s and a `WillRepeatedly()`, where `n` >= 0. A method with no `EXPECT_CALL()` is free to be invoked *any number of times*, and the default action will be taken each time. ### Matchers {#MatcherList} A **matcher** matches a *single* argument. You can use it inside `ON_CALL()` or `EXPECT_CALL()`, or use it to validate a value directly: | Matcher | Description | | :----------------------------------- | :------------------------------------ | | `EXPECT_THAT(actual_value, matcher)` | Asserts that `actual_value` matches | : : `matcher`. 
: | `ASSERT_THAT(actual_value, matcher)` | The same as | : : `EXPECT_THAT(actual_value, matcher)`, : : : except that it generates a **fatal** : : : failure. : Built-in matchers (where `argument` is the function argument) are divided into several categories: ## Wildcard Matcher | Description :-------------------------- | :----------------------------------------------- `_` | `argument` can be any value of the correct type. `A()` or `An()` | `argument` can be any value of type `type`. #### Generic Comparison | Matcher | Description | | :--------------------- | :-------------------------------------------------- | | `Eq(value)` or `value` | `argument == value` | | `Ge(value)` | `argument >= value` | | `Gt(value)` | `argument > value` | | `Le(value)` | `argument <= value` | | `Lt(value)` | `argument < value` | | `Ne(value)` | `argument != value` | | `IsNull()` | `argument` is a `NULL` pointer (raw or smart). | | `NotNull()` | `argument` is a non-null pointer (raw or smart). | | `Optional(m)` | `argument` is `optional<>` that contains a value | : : matching `m`. : | `VariantWith(m)` | `argument` is `variant<>` that holds the | : : alternative of type T with a value matching `m`. : | `Ref(variable)` | `argument` is a reference to `variable`. | | `TypedEq(value)` | `argument` has type `type` and is equal to `value`. | : : You may need to use this instead of `Eq(value)` : : : when the mock function is overloaded. : Except `Ref()`, these matchers make a *copy* of `value` in case it's modified or destructed later. If the compiler complains that `value` doesn't have a public copy constructor, try wrap it in `ByRef()`, e.g. `Eq(ByRef(non_copyable_value))`. If you do that, make sure `non_copyable_value` is not changed afterwards, or the meaning of your matcher will be changed. #### Floating-Point Matchers {#FpMatchers} | Matcher | Description | | :------------------------------- | :--------------------------------- | | `DoubleEq(a_double)` | `argument` is a `double` value | : : approximately equal to `a_double`, : : : treating two NaNs as unequal. : | `FloatEq(a_float)` | `argument` is a `float` value | : : approximately equal to `a_float`, : : : treating two NaNs as unequal. : | `NanSensitiveDoubleEq(a_double)` | `argument` is a `double` value | : : approximately equal to `a_double`, : : : treating two NaNs as equal. : | `NanSensitiveFloatEq(a_float)` | `argument` is a `float` value | : : approximately equal to `a_float`, : : : treating two NaNs as equal. : The above matchers use ULP-based comparison (the same as used in googletest). They automatically pick a reasonable error bound based on the absolute value of the expected value. `DoubleEq()` and `FloatEq()` conform to the IEEE standard, which requires comparing two NaNs for equality to return false. The `NanSensitive*` version instead treats two NaNs as equal, which is often what a user wants. | Matcher | Description | | :---------------------------------- | :------------------------------------- | | `DoubleNear(a_double, | `argument` is a `double` value close | : max_abs_error)` : to `a_double` (absolute error <= : : : `max_abs_error`), treating two NaNs as : : : unequal. : | `FloatNear(a_float, max_abs_error)` | `argument` is a `float` value close to | : : `a_float` (absolute error <= : : : `max_abs_error`), treating two NaNs as : : : unequal. : | `NanSensitiveDoubleNear(a_double, | `argument` is a `double` value close | : max_abs_error)` : to `a_double` (absolute error <= : : : `max_abs_error`), treating two NaNs as : : : equal. 
: | `NanSensitiveFloatNear(a_float, | `argument` is a `float` value close to | : max_abs_error)` : `a_float` (absolute error <= : : : `max_abs_error`), treating two NaNs as : : : equal. : #### String Matchers The `argument` can be either a C string or a C++ string object: | Matcher | Description | | :---------------------- | :------------------------------------------------- | | `ContainsRegex(string)` | `argument` matches the given regular expression. | | `EndsWith(suffix)` | `argument` ends with string `suffix`. | | `HasSubstr(string)` | `argument` contains `string` as a sub-string. | | `MatchesRegex(string)` | `argument` matches the given regular expression | : : with the match starting at the first character and : : : ending at the last character. : | `StartsWith(prefix)` | `argument` starts with string `prefix`. | | `StrCaseEq(string)` | `argument` is equal to `string`, ignoring case. | | `StrCaseNe(string)` | `argument` is not equal to `string`, ignoring | : : case. : | `StrEq(string)` | `argument` is equal to `string`. | | `StrNe(string)` | `argument` is not equal to `string`. | `ContainsRegex()` and `MatchesRegex()` take ownership of the `RE` object. They use the regular expression syntax defined [here](http://go/gunit-advanced-regex). `StrCaseEq()`, `StrCaseNe()`, `StrEq()`, and `StrNe()` work for wide strings as well. #### Container Matchers Most STL-style containers support `==`, so you can use `Eq(expected_container)` or simply `expected_container` to match a container exactly. If you want to write the elements in-line, match them more flexibly, or get more informative messages, you can use: | Matcher | Description | | :---------------------------------------- | :------------------------------- | | `BeginEndDistanceIs(m)` | `argument` is a container whose | : : `begin()` and `end()` iterators : : : are separated by a number of : : : increments matching `m`. E.g. : : : `BeginEndDistanceIs(2)` or : : : `BeginEndDistanceIs(Lt(2))`. For : : : containers that define a : : : `size()` method, `SizeIs(m)` may : : : be more efficient. : | `ContainerEq(container)` | The same as `Eq(container)` | : : except that the failure message : : : also includes which elements are : : : in one container but not the : : : other. : | `Contains(e)` | `argument` contains an element | : : that matches `e`, which can be : : : either a value or a matcher. : | `Each(e)` | `argument` is a container where | : : *every* element matches `e`, : : : which can be either a value or a : : : matcher. : | `ElementsAre(e0, e1, ..., en)` | `argument` has `n + 1` elements, | : : where the *i*-th element matches : : : `ei`, which can be a value or a : : : matcher. : | `ElementsAreArray({e0, e1, ..., en})`, | The same as `ElementsAre()` | : `ElementsAreArray(a_container)`, : except that the expected element : : `ElementsAreArray(begin, end)`, : values/matchers come from an : : `ElementsAreArray(array)`, or : initializer list, STL-style : : `ElementsAreArray(array, count)` : container, iterator range, or : : : C-style array. : | `IsEmpty()` | `argument` is an empty container | : : (`container.empty()`). : | `IsFalse()` | `argument` evaluates to `false` | : : in a Boolean context. : | `IsSubsetOf({e0, e1, ..., en})`, | `argument` matches | : `IsSubsetOf(a_container)`, : `UnorderedElementsAre(x0, x1, : : `IsSubsetOf(begin, end)`, : ..., xk)` for some subset `{x0, : : `IsSubsetOf(array)`, or : x1, ..., xk}` of the expected : : `IsSubsetOf(array, count)` : matchers. 
: | `IsSupersetOf({e0, e1, ..., en})`, | Some subset of `argument` | : `IsSupersetOf(a_container)`, : matches : : `IsSupersetOf(begin, end)`, : `UnorderedElementsAre(`expected : : `IsSupersetOf(array)`, or : matchers`)`. : : `IsSupersetOf(array, count)` : : | `IsTrue()` | `argument` evaluates to `true` | : : in a Boolean context. : | `Pointwise(m, container)`, `Pointwise(m, | `argument` contains the same | : {e0, e1, ..., en})` : number of elements as in : : : `container`, and for all i, (the : : : i-th element in `argument`, the : : : i-th element in `container`) : : : match `m`, which is a matcher on : : : 2-tuples. E.g. `Pointwise(Le(), : : : upper_bounds)` verifies that : : : each element in `argument` : : : doesn't exceed the corresponding : : : element in `upper_bounds`. See : : : more detail below. : | `SizeIs(m)` | `argument` is a container whose | : : size matches `m`. E.g. : : : `SizeIs(2)` or `SizeIs(Lt(2))`. : | `UnorderedElementsAre(e0, e1, ..., en)` | `argument` has `n + 1` elements, | : : and under *some* permutation of : : : the elements, each element : : : matches an `ei` (for a different : : : `i`), which can be a value or a : : : matcher. : | `UnorderedElementsAreArray({e0, e1, ..., | The same as | : en})`, : `UnorderedElementsAre()` except : : `UnorderedElementsAreArray(a_container)`, : that the expected element : : `UnorderedElementsAreArray(begin, end)`, : values/matchers come from an : : `UnorderedElementsAreArray(array)`, or : initializer list, STL-style : : `UnorderedElementsAreArray(array, count)` : container, iterator range, or : : : C-style array. : | `UnorderedPointwise(m, container)`, | Like `Pointwise(m, container)`, | : `UnorderedPointwise(m, {e0, e1, ..., : but ignores the order of : : en})` : elements. : | `WhenSorted(m)` | When `argument` is sorted using | : : the `<` operator, it matches : : : container matcher `m`. E.g. : : : `WhenSorted(ElementsAre(1, 2, : : : 3))` verifies that `argument` : : : contains elements 1, 2, and 3, : : : ignoring order. : | `WhenSortedBy(comparator, m)` | The same as `WhenSorted(m)`, | : : except that the given comparator : : : instead of `<` is used to sort : : : `argument`. E.g. : : : `WhenSortedBy(std\:\:greater(), : : : ElementsAre(3, 2, 1))`. : **Notes:** * These matchers can also match: 1. a native array passed by reference (e.g. in `Foo(const int (&a)[5])`), and 2. an array passed as a pointer and a count (e.g. in `Bar(const T* buffer, int len)` -- see [Multi-argument Matchers](#MultiArgMatchers)). * The array being matched may be multi-dimensional (i.e. its elements can be arrays). * `m` in `Pointwise(m, ...)` should be a matcher for `::std::tuple` where `T` and `U` are the element type of the actual container and the expected container, respectively. For example, to compare two `Foo` containers where `Foo` doesn't support `operator==`, one might write: ```cpp using ::std::get; MATCHER(FooEq, "") { return std::get<0>(arg).Equals(std::get<1>(arg)); } ... EXPECT_THAT(actual_foos, Pointwise(FooEq(), expected_foos)); ``` #### Member Matchers | Matcher | Description | | :------------------------------ | :----------------------------------------- | | `Field(&class::field, m)` | `argument.field` (or `argument->field` | : : when `argument` is a plain pointer) : : : matches matcher `m`, where `argument` is : : : an object of type _class_. : | `Key(e)` | `argument.first` matches `e`, which can be | : : either a value or a matcher. E.g. : : : `Contains(Key(Le(5)))` can verify that a : : : `map` contains a key `<= 5`. 
: | `Pair(m1, m2)` | `argument` is an `std::pair` whose `first` | : : field matches `m1` and `second` field : : : matches `m2`. : | `Property(&class::property, m)` | `argument.property()` (or | : : `argument->property()` when `argument` is : : : a plain pointer) matches matcher `m`, : : : where `argument` is an object of type : : : _class_. : #### Matching the Result of a Function, Functor, or Callback | Matcher | Description | | :--------------- | :------------------------------------------------ | | `ResultOf(f, m)` | `f(argument)` matches matcher `m`, where `f` is a | : : function or functor. : #### Pointer Matchers | Matcher | Description | | :------------------------ | :---------------------------------------------- | | `Pointee(m)` | `argument` (either a smart pointer or a raw | : : pointer) points to a value that matches matcher : : : `m`. : | `WhenDynamicCastTo(m)` | when `argument` is passed through | : : `dynamic_cast()`, it matches matcher `m`. : #### Multi-argument Matchers {#MultiArgMatchers} Technically, all matchers match a *single* value. A "multi-argument" matcher is just one that matches a *tuple*. The following matchers can be used to match a tuple `(x, y)`: Matcher | Description :------ | :---------- `Eq()` | `x == y` `Ge()` | `x >= y` `Gt()` | `x > y` `Le()` | `x <= y` `Lt()` | `x < y` `Ne()` | `x != y` You can use the following selectors to pick a subset of the arguments (or reorder them) to participate in the matching: | Matcher | Description | | :------------------------- | :---------------------------------------------- | | `AllArgs(m)` | Equivalent to `m`. Useful as syntactic sugar in | : : `.With(AllArgs(m))`. : | `Args(m)` | The tuple of the `k` selected (using 0-based | : : indices) arguments matches `m`, e.g. `Args<1, : : : 2>(Eq())`. : #### Composite Matchers You can make a matcher from one or more other matchers: | Matcher | Description | | :----------------------- | :---------------------------------------------- | | `AllOf(m1, m2, ..., mn)` | `argument` matches all of the matchers `m1` to | : : `mn`. : | `AnyOf(m1, m2, ..., mn)` | `argument` matches at least one of the matchers | : : `m1` to `mn`. : | `Not(m)` | `argument` doesn't match matcher `m`. | #### Adapters for Matchers | Matcher | Description | | :---------------------- | :------------------------------------ | | `MatcherCast(m)` | casts matcher `m` to type | : : `Matcher`. : | `SafeMatcherCast(m)` | [safely | : : casts](cook_book.md#casting-matchers) : : : matcher `m` to type `Matcher`. : | `Truly(predicate)` | `predicate(argument)` returns | : : something considered by C++ to be : : : true, where `predicate` is a function : : : or functor. : `AddressSatisfies(callback)` and `Truly(callback)` take ownership of `callback`, which must be a permanent callback. #### Matchers as Predicates {#MatchersAsPredicatesCheat} | Matcher | Description | | :---------------------------- | :------------------------------------------ | | `Matches(m)(value)` | evaluates to `true` if `value` matches `m`. | : : You can use `Matches(m)` alone as a unary : : : functor. : | `ExplainMatchResult(m, value, | evaluates to `true` if `value` matches `m`, | : result_listener)` : explaining the result to `result_listener`. : | `Value(value, m)` | evaluates to `true` if `value` matches `m`. 
| #### Defining Matchers | Matcher | Description | | :----------------------------------- | :------------------------------------ | | `MATCHER(IsEven, "") { return (arg % | Defines a matcher `IsEven()` to match | : 2) == 0; }` : an even number. : | `MATCHER_P(IsDivisibleBy, n, "") { | Defines a macher `IsDivisibleBy(n)` | : *result_listener << "where the : to match a number divisible by `n`. : : remainder is " << (arg % n); return : : : (arg % n) == 0; }` : : | `MATCHER_P2(IsBetween, a, b, | Defines a matcher `IsBetween(a, b)` | : std\:\:string(negation ? "isn't" \: : to match a value in the range [`a`, : : "is") + " between " + : `b`]. : : PrintToString(a) + " and " + : : : PrintToString(b)) { return a <= arg : : : && arg <= b; }` : : **Notes:** 1. The `MATCHER*` macros cannot be used inside a function or class. -1. The matcher body must be *purely functional* (i.e. it cannot have any side +2. The matcher body must be *purely functional* (i.e. it cannot have any side effect, and the result must not depend on anything other than the value being matched and the matcher parameters). -1. You can use `PrintToString(x)` to convert a value `x` of any type to a +3. You can use `PrintToString(x)` to convert a value `x` of any type to a string. ## Matchers as Test Assertions Matcher | Description :--------------------------- | :---------- `ASSERT_THAT(expression, m)` | Generates a [fatal failure](../../googletest/docs/primer.md#assertions) if the value of `expression` doesn't match matcher `m`. `EXPECT_THAT(expression, m)` | Generates a non-fatal failure if the value of `expression` doesn't match matcher `m`. ### Actions {#ActionList} **Actions** specify what a mock function should do when invoked. #### Returning a Value | Matcher | Description | | :-------------------------- | :-------------------------------------------- | | `Return()` | Return from a `void` mock function. | | `Return(value)` | Return `value`. If the type of `value` is | : : different to the mock function's return type, : : : `value` is converted to the latter type at : : : the time the expectation is set, not when : : : the action is executed. : | `ReturnArg()` | Return the `N`-th (0-based) argument. | | `ReturnNew(a1, ..., ak)` | Return `new T(a1, ..., ak)`; a different | : : object is created each time. : | `ReturnNull()` | Return a null pointer. | | `ReturnPointee(ptr)` | Return the value pointed to by `ptr`. | | `ReturnRef(variable)` | Return a reference to `variable`. | | `ReturnRefOfCopy(value)` | Return a reference to a copy of `value`; the | : : copy lives as long as the action. : #### Side Effects | Matcher | Description | | :--------------------------------- | :-------------------------------------- | | `Assign(&variable, value)` | Assign `value` to variable. | | `DeleteArg()` | Delete the `N`-th (0-based) argument, | : : which must be a pointer. : | `SaveArg(pointer)` | Save the `N`-th (0-based) argument to | : : `*pointer`. : | `SaveArgPointee(pointer)` | Save the value pointed to by the `N`-th | : : (0-based) argument to `*pointer`. : | `SetArgReferee(value)` | Assign value to the variable referenced | : : by the `N`-th (0-based) argument. : | `SetArgPointee(value)` | Assign `value` to the variable pointed | : : by the `N`-th (0-based) argument. : | `SetArgumentPointee(value)` | Same as `SetArgPointee(value)`. | : : Deprecated. Will be removed in v1.7.0. 
: | `SetArrayArgument(first, last)` | Copies the elements in source range | : : [`first`, `last`) to the array pointed : : : to by the `N`-th (0-based) argument, : : : which can be either a pointer or an : : : iterator. The action does not take : : : ownership of the elements in the source : : : range. : | `SetErrnoAndReturn(error, value)` | Set `errno` to `error` and return | : : `value`. : | `Throw(exception)` | Throws the given exception, which can | : : be any copyable value. Available since : : : v1.1.0. : #### Using a Function, Functor, Lambda, or Callback as an Action In the following, by "callable" we mean a free function, `std::function`, functor, lambda, or `google3`-style permanent callback. | Matcher | Description | | :---------------------------------- | :------------------------------------- | | `Invoke(f)` | Invoke `f` with the arguments passed | : : to the mock function, where `f` can be : : : a global/static function or a functor. : | `Invoke(object_pointer, | Invoke the {method on the object with | : &class\:\:method)` : the arguments passed to the mock : : : function. : | `InvokeWithoutArgs(f)` | Invoke `f`, which can be a | : : global/static function or a functor. : : : `f` must take no arguments. : | `InvokeWithoutArgs(object_pointer, | Invoke the method on the object, which | : &class\:\:method)` : takes no arguments. : | `InvokeArgument(arg1, arg2, ..., | Invoke the mock function's `N`-th | : argk)` : (0-based) argument, which must be a : : : function or a functor, with the `k` : : : arguments. : The return value of the invoked function is used as the return value of the action. When defining a callable to be used with `Invoke*()`, you can declare any unused parameters as `Unused`: ```cpp using ::testing::Invoke; double Distance(Unused, double x, double y) { return sqrt(x*x + y*y); } ... EXPECT_CALL(mock, Foo("Hi", _, _)).WillOnce(Invoke(Distance)); ``` `Invoke(callback)` and `InvokeWithoutArgs(callback)` take ownership of `callback`, which must be permanent. The type of `callback` must be a base callback type instead of a derived one, e.g. ```cpp BlockingClosure* done = new BlockingClosure; ... Invoke(done) ...; // This won't compile! Closure* done2 = new BlockingClosure; ... Invoke(done2) ...; // This works. ``` In `InvokeArgument(...)`, if an argument needs to be passed by reference, wrap it inside `ByRef()`. For example, ```cpp using ::testing::ByRef; using ::testing::InvokeArgument; ... InvokeArgument<2>(5, string("Hi"), ByRef(foo)) ``` calls the mock function's #2 argument, passing to it `5` and `string("Hi")` by value, and `foo` by reference. ## Default Action | Matcher | Description | | :------------ | :----------------------------------------------------- | | `DoDefault()` | Do the default action (specified by `ON_CALL()` or the | : : built-in one). : **Note:** due to technical reasons, `DoDefault()` cannot be used inside a composite action - trying to do so will result in a run-time error. ## Composite Actions | Matcher | Description | | :----------------------------- | :------------------------------------------ | | `DoAll(a1, a2, ..., an)` | Do all actions `a1` to `an` and return the | : : result of `an` in each invocation. The : : : first `n - 1` sub-actions must return void. : | `IgnoreResult(a)` | Perform action `a` and ignore its result. | : : `a` must not return void. : | `WithArg(a)` | Pass the `N`-th (0-based) argument of the | : : mock function to action `a` and perform it. 
: | `WithArgs(a)` | Pass the selected (0-based) arguments of | : : the mock function to action `a` and perform : : : it. : | `WithoutArgs(a)` | Perform action `a` without any arguments. | ## Defining Actions | Matcher | Description | | :--------------------------------- | :-------------------------------------- | | `ACTION(Sum) { return arg0 + arg1; | Defines an action `Sum()` to return the | : }` : sum of the mock function's argument #0 : : : and #1. : | `ACTION_P(Plus, n) { return arg0 + | Defines an action `Plus(n)` to return | : n; }` : the sum of the mock function's : : : argument #0 and `n`. : | `ACTION_Pk(Foo, p1, ..., pk) { | Defines a parameterized action `Foo(p1, | : statements; }` : ..., pk)` to execute the given : : : `statements`. : The `ACTION*` macros cannot be used inside a function or class. ### Cardinalities {#CardinalityList} These are used in `Times()` to specify how many times a mock function will be called: | Matcher | Description | | :---------------- | :----------------------------------------------------- | | `AnyNumber()` | The function can be called any number of times. | | `AtLeast(n)` | The call is expected at least `n` times. | | `AtMost(n)` | The call is expected at most `n` times. | | `Between(m, n)` | The call is expected between `m` and `n` (inclusive) | : : times. : | `Exactly(n) or n` | The call is expected exactly `n` times. In particular, | : : the call should never happen when `n` is 0. : ### Expectation Order By default, the expectations can be matched in *any* order. If some or all expectations must be matched in a given order, there are two ways to specify it. They can be used either independently or together. #### The After Clause {#AfterClause} ```cpp using ::testing::Expectation; ... Expectation init_x = EXPECT_CALL(foo, InitX()); Expectation init_y = EXPECT_CALL(foo, InitY()); EXPECT_CALL(foo, Bar()) .After(init_x, init_y); ``` says that `Bar()` can be called only after both `InitX()` and `InitY()` have been called. If you don't know how many pre-requisites an expectation has when you write it, you can use an `ExpectationSet` to collect them: ```cpp using ::testing::ExpectationSet; ... ExpectationSet all_inits; for (int i = 0; i < element_count; i++) { all_inits += EXPECT_CALL(foo, InitElement(i)); } EXPECT_CALL(foo, Bar()) .After(all_inits); ``` says that `Bar()` can be called only after all elements have been initialized (but we don't care about which elements get initialized before the others). Modifying an `ExpectationSet` after using it in an `.After()` doesn't affect the meaning of the `.After()`. #### Sequences {#UsingSequences} When you have a long chain of sequential expectations, it's easier to specify the order using **sequences**, which don't require you to given each expectation in the chain a different name. *All expected calls* in the same sequence must occur in the order they are specified. ```cpp using ::testing::Return; using ::testing::Sequence; Sequence s1, s2; ... EXPECT_CALL(foo, Reset()) .InSequence(s1, s2) .WillOnce(Return(true)); EXPECT_CALL(foo, GetSize()) .InSequence(s1) .WillOnce(Return(1)); EXPECT_CALL(foo, Describe(A())) .InSequence(s2) .WillOnce(Return("dummy")); ``` says that `Reset()` must be called before *both* `GetSize()` *and* `Describe()`, and the latter two can occur in any order. To put many expectations in a sequence conveniently: ```cpp using ::testing::InSequence; { InSequence seq; EXPECT_CALL(...)...; EXPECT_CALL(...)...; ... 
EXPECT_CALL(...)...; } ``` says that all expected calls in the scope of `seq` must occur in strict order. The name `seq` is irrelevant. ### Verifying and Resetting a Mock gMock will verify the expectations on a mock object when it is destructed, or you can do it earlier: ```cpp using ::testing::Mock; ... // Verifies and removes the expectations on mock_obj; // returns true iff successful. Mock::VerifyAndClearExpectations(&mock_obj); ... // Verifies and removes the expectations on mock_obj; // also removes the default actions set by ON_CALL(); // returns true iff successful. Mock::VerifyAndClear(&mock_obj); ``` You can also tell gMock that a mock object can be leaked and doesn't need to be verified: ```cpp Mock::AllowLeak(&mock_obj); ``` ### Mock Classes gMock defines a convenient mock class template ```cpp class MockFunction { public: MOCK_METHOD(R, Call, (A1, ..., An)); }; ``` See this [recipe](cook_book.md#using-check-points) for one application of it. ### Flags | Flag | Description | | :----------------------------- | :---------------------------------------- | | `--gmock_catch_leaked_mocks=0` | Don't report leaked mock objects as | : : failures. : | `--gmock_verbose=LEVEL` | Sets the default verbosity level (`info`, | : : `warning`, or `error`) of Google Mock : : : messages. : diff --git a/googletest/docs/advanced.md b/googletest/docs/advanced.md index ac7e6890..08db2e4e 100644 --- a/googletest/docs/advanced.md +++ b/googletest/docs/advanced.md @@ -1,2555 +1,2555 @@ # Advanced googletest Topics ## Introduction Now that you have read the [googletest Primer](primer.md) and learned how to write tests using googletest, it's time to learn some new tricks. This document will show you more assertions as well as how to construct complex failure messages, propagate fatal failures, reuse and speed up your test fixtures, and use various flags with your tests. ## More Assertions This section covers some less frequently used, but still significant, assertions. ### Explicit Success and Failure These three assertions do not actually test a value or expression. Instead, they generate a success or failure directly. Like the macros that actually perform a test, you may stream a custom failure message into them. ```c++ SUCCEED(); ``` Generates a success. This does **NOT** make the overall test succeed. A test is considered successful only if none of its assertions fail during its execution. NOTE: `SUCCEED()` is purely documentary and currently doesn't generate any user-visible output. However, we may add `SUCCEED()` messages to googletest's output in the future. ```c++ FAIL(); ADD_FAILURE(); ADD_FAILURE_AT("file_path", line_number); ``` `FAIL()` generates a fatal failure, while `ADD_FAILURE()` and `ADD_FAILURE_AT()` generate a nonfatal failure. These are useful when control flow, rather than a Boolean expression, determines the test's success or failure. For example, you might want to write something like: ```c++ switch(expression) { case 1: ... some checks ... case 2: ... some other checks ... default: FAIL() << "We shouldn't get here."; } ``` NOTE: you can only use `FAIL()` in functions that return `void`. See the [Assertion Placement section](#assertion-placement) for more information. 
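For instance, a test whose outcome is decided by control flow rather than a single Boolean expression might look like the following sketch (illustrative only; `ParseAll()` and the test name are assumptions, not part of googletest):

```c++
#include <cstddef>
#include <vector>

#include "gtest/gtest.h"

// Hypothetical helper, assumed for illustration; returns -1 for a bad record.
std::vector<int> ParseAll() { return {1, 2, -1, 4}; }

TEST(ParserTest, ReportsEachBadRecord) {
  const std::vector<int> values = ParseAll();
  if (values.empty()) {
    FAIL() << "ParseAll() returned nothing";  // fatal: aborts this test function
  }
  for (std::size_t i = 0; i < values.size(); ++i) {
    if (values[i] < 0) {
      ADD_FAILURE() << "element " << i << " is negative";  // non-fatal: the test continues
    }
  }
}
```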
### Exception Assertions These are for verifying that a piece of code throws (or does not throw) an exception of the given type: Fatal assertion | Nonfatal assertion | Verifies ------------------------------------------ | ------------------------------------------ | -------- `ASSERT_THROW(statement, exception_type);` | `EXPECT_THROW(statement, exception_type);` | `statement` throws an exception of the given type `ASSERT_ANY_THROW(statement);` | `EXPECT_ANY_THROW(statement);` | `statement` throws an exception of any type `ASSERT_NO_THROW(statement);` | `EXPECT_NO_THROW(statement);` | `statement` doesn't throw any exception Examples: ```c++ ASSERT_THROW(Foo(5), bar_exception); EXPECT_NO_THROW({ int n = 5; Bar(&n); }); ``` **Availability**: requires exceptions to be enabled in the build environment ### Predicate Assertions for Better Error Messages Even though googletest has a rich set of assertions, they can never be complete, as it's impossible (nor a good idea) to anticipate all scenarios a user might run into. Therefore, sometimes a user has to use `EXPECT_TRUE()` to check a complex expression, for lack of a better macro. This has the problem of not showing you the values of the parts of the expression, making it hard to understand what went wrong. As a workaround, some users choose to construct the failure message by themselves, streaming it into `EXPECT_TRUE()`. However, this is awkward especially when the expression has side-effects or is expensive to evaluate. googletest gives you three different options to solve this problem: #### Using an Existing Boolean Function If you already have a function or functor that returns `bool` (or a type that can be implicitly converted to `bool`), you can use it in a *predicate assertion* to get the function arguments printed for free: | Fatal assertion | Nonfatal assertion | Verifies | | -------------------- | -------------------- | --------------------------- | | `ASSERT_PRED1(pred1, | `EXPECT_PRED1(pred1, | `pred1(val1)` is true | : val1);` : val1);` : : | `ASSERT_PRED2(pred2, | `EXPECT_PRED2(pred2, | `pred2(val1, val2)` is true | : val1, val2);` : val1, val2);` : : | `...` | `...` | ... | In the above, `predn` is an `n`-ary predicate function or functor, where `val1`, `val2`, ..., and `valn` are its arguments. The assertion succeeds if the predicate returns `true` when applied to the given arguments, and fails otherwise. When the assertion fails, it prints the value of each argument. In either case, the arguments are evaluated exactly once. Here's an example. Given ```c++ // Returns true if m and n have no common divisors except 1. bool MutuallyPrime(int m, int n) { ... } const int a = 3; const int b = 4; const int c = 10; ``` the assertion ```c++ EXPECT_PRED2(MutuallyPrime, a, b); ``` will succeed, while the assertion ```c++ EXPECT_PRED2(MutuallyPrime, b, c); ``` will fail with the message ```none MutuallyPrime(b, c) is false, where b is 4 c is 10 ``` > NOTE: > > 1. If you see a compiler error "no matching function to call" when using > `ASSERT_PRED*` or `EXPECT_PRED*`, please see > [this](faq.md#the-compiler-complains-no-matching-function-to-call-when-i-use-assert-pred-how-do-i-fix-it) > for how to resolve it. #### Using a Function That Returns an AssertionResult While `EXPECT_PRED*()` and friends are handy for a quick job, the syntax is not satisfactory: you have to use different macros for different arities, and it feels more like Lisp than C++. The `::testing::AssertionResult` class solves this problem. 
An `AssertionResult` object represents the result of an assertion (whether it's a success or a failure, and an associated message). You can create an `AssertionResult` using one of these factory functions: ```c++ namespace testing { // Returns an AssertionResult object to indicate that an assertion has // succeeded. AssertionResult AssertionSuccess(); // Returns an AssertionResult object to indicate that an assertion has // failed. AssertionResult AssertionFailure(); } ``` You can then use the `<<` operator to stream messages to the `AssertionResult` object. To provide more readable messages in Boolean assertions (e.g. `EXPECT_TRUE()`), write a predicate function that returns `AssertionResult` instead of `bool`. For example, if you define `IsEven()` as: ```c++ ::testing::AssertionResult IsEven(int n) { if ((n % 2) == 0) return ::testing::AssertionSuccess(); else return ::testing::AssertionFailure() << n << " is odd"; } ``` instead of: ```c++ bool IsEven(int n) { return (n % 2) == 0; } ``` the failed assertion `EXPECT_TRUE(IsEven(Fib(4)))` will print: ```none Value of: IsEven(Fib(4)) Actual: false (3 is odd) Expected: true ``` instead of a more opaque ```none Value of: IsEven(Fib(4)) Actual: false Expected: true ``` If you want informative messages in `EXPECT_FALSE` and `ASSERT_FALSE` as well (one third of Boolean assertions in the Google code base are negative ones), and are fine with making the predicate slower in the success case, you can supply a success message: ```c++ ::testing::AssertionResult IsEven(int n) { if ((n % 2) == 0) return ::testing::AssertionSuccess() << n << " is even"; else return ::testing::AssertionFailure() << n << " is odd"; } ``` Then the statement `EXPECT_FALSE(IsEven(Fib(6)))` will print ```none Value of: IsEven(Fib(6)) Actual: true (8 is even) Expected: false ``` #### Using a Predicate-Formatter If you find the default message generated by `(ASSERT|EXPECT)_PRED*` and `(ASSERT|EXPECT)_(TRUE|FALSE)` unsatisfactory, or some arguments to your predicate do not support streaming to `ostream`, you can instead use the following *predicate-formatter assertions* to *fully* customize how the message is formatted: Fatal assertion | Nonfatal assertion | Verifies ------------------------------------------------ | ------------------------------------------------ | -------- `ASSERT_PRED_FORMAT1(pred_format1, val1);` | `EXPECT_PRED_FORMAT1(pred_format1, val1);` | `pred_format1(val1)` is successful `ASSERT_PRED_FORMAT2(pred_format2, val1, val2);` | `EXPECT_PRED_FORMAT2(pred_format2, val1, val2);` | `pred_format2(val1, val2)` is successful `...` | `...` | ... The difference between this and the previous group of macros is that instead of a predicate, `(ASSERT|EXPECT)_PRED_FORMAT*` take a *predicate-formatter* (`pred_formatn`), which is a function or functor with the signature: ```c++ ::testing::AssertionResult PredicateFormattern(const char* expr1, const char* expr2, ... const char* exprn, T1 val1, T2 val2, ... Tn valn); ``` where `val1`, `val2`, ..., and `valn` are the values of the predicate arguments, and `expr1`, `expr2`, ..., and `exprn` are the corresponding expressions as they appear in the source code. The types `T1`, `T2`, ..., and `Tn` can be either value types or reference types. For example, if an argument has type `Foo`, you can declare it as either `Foo` or `const Foo&`, whichever is appropriate. 
As an example, let's improve the failure message in `MutuallyPrime()`, which was used with `EXPECT_PRED2()`: ```c++ // Returns the smallest prime common divisor of m and n, // or 1 when m and n are mutually prime. int SmallestPrimeCommonDivisor(int m, int n) { ... } // A predicate-formatter for asserting that two integers are mutually prime. ::testing::AssertionResult AssertMutuallyPrime(const char* m_expr, const char* n_expr, int m, int n) { if (MutuallyPrime(m, n)) return ::testing::AssertionSuccess(); return ::testing::AssertionFailure() << m_expr << " and " << n_expr << " (" << m << " and " << n << ") are not mutually prime, " << "as they have a common divisor " << SmallestPrimeCommonDivisor(m, n); } ``` With this predicate-formatter, we can use ```c++ EXPECT_PRED_FORMAT2(AssertMutuallyPrime, b, c); ``` to generate the message ```none b and c (4 and 10) are not mutually prime, as they have a common divisor 2. ``` As you may have realized, many of the built-in assertions we introduced earlier are special cases of `(EXPECT|ASSERT)_PRED_FORMAT*`. In fact, most of them are indeed defined using `(EXPECT|ASSERT)_PRED_FORMAT*`. ### Floating-Point Comparison Comparing floating-point numbers is tricky. Due to round-off errors, it is very unlikely that two floating-points will match exactly. Therefore, `ASSERT_EQ` 's naive comparison usually doesn't work. And since floating-points can have a wide value range, no single fixed error bound works. It's better to compare by a fixed relative error bound, except for values close to 0 due to the loss of precision there. In general, for floating-point comparison to make sense, the user needs to carefully choose the error bound. If they don't want or care to, comparing in terms of Units in the Last Place (ULPs) is a good default, and googletest provides assertions to do this. Full details about ULPs are quite long; if you want to learn more, see [here](https://randomascii.wordpress.com/2012/02/25/comparing-floating-point-numbers-2012-edition/). #### Floating-Point Macros | Fatal assertion | Nonfatal assertion | Verifies | | ----------------------- | ----------------------- | ----------------------- | | `ASSERT_FLOAT_EQ(val1, | `EXPECT_FLOAT_EQ(val1, | the two `float` values | : val2);` : val2);` : are almost equal : | `ASSERT_DOUBLE_EQ(val1, | `EXPECT_DOUBLE_EQ(val1, | the two `double` values | : val2);` : val2);` : are almost equal : By "almost equal" we mean the values are within 4 ULP's from each other. The following assertions allow you to choose the acceptable error bound: | Fatal assertion | Nonfatal assertion | Verifies | | ------------------ | ------------------------ | ------------------------- | | `ASSERT_NEAR(val1, | `EXPECT_NEAR(val1, val2, | the difference between | : val2, abs_error);` : abs_error);` : `val1` and `val2` doesn't : : : : exceed the given absolute : : : : error : #### Floating-Point Predicate-Format Functions Some floating-point operations are useful, but not that often used. In order to avoid an explosion of new macros, we provide them as predicate-format functions that can be used in predicate assertion macros (e.g. `EXPECT_PRED_FORMAT2`, etc). ```c++ EXPECT_PRED_FORMAT2(::testing::FloatLE, val1, val2); EXPECT_PRED_FORMAT2(::testing::DoubleLE, val1, val2); ``` Verifies that `val1` is less than, or almost equal to, `val2`. You can replace `EXPECT_PRED_FORMAT2` in the above table with `ASSERT_PRED_FORMAT2`. 
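As a quick illustrative sketch (not from the original docs), the classic `0.1 + 0.2` round-off case shows the ULP-based, absolute-error, and predicate-format styles side by side:

```c++
#include "gtest/gtest.h"

TEST(FloatingPointTest, SumIsAlmostPointThree) {
  const double sum = 0.1 + 0.2;  // stored as 0.30000000000000004 in IEEE doubles
  EXPECT_DOUBLE_EQ(0.3, sum);                           // within 4 ULPs, so this passes
  EXPECT_NEAR(0.3, sum, 1e-9);                          // explicit absolute error bound
  EXPECT_PRED_FORMAT2(::testing::DoubleLE, sum, 0.31);  // sum <= 0.31, or almost equal to it
}
```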
### Asserting Using gMock Matchers [gMock](../../googlemock) comes with a library of matchers for validating arguments passed to mock objects. A gMock *matcher* is basically a predicate that knows how to describe itself. It can be used in these assertion macros: | Fatal assertion | Nonfatal assertion | Verifies | | ------------------- | ------------------------------ | --------------------- | | `ASSERT_THAT(value, | `EXPECT_THAT(value, matcher);` | value matches matcher | : matcher);` : : : For example, `StartsWith(prefix)` is a matcher that matches a string starting with `prefix`, and you can write: ```c++ using ::testing::StartsWith; ... // Verifies that Foo() returns a string starting with "Hello". EXPECT_THAT(Foo(), StartsWith("Hello")); ``` Read this [recipe](../../googlemock/docs/cook_book.md#using-matchers-in-googletest-assertions) in the gMock Cookbook for more details. gMock has a rich set of matchers. You can do many things googletest cannot do alone with them. For a list of matchers gMock provides, read [this](../../googlemock/docs/cook_book.md##using-matchers). It's easy to write your [own matchers](../../googlemock/docs/cook_book.md#NewMatchers) too. gMock is bundled with googletest, so you don't need to add any build dependency in order to take advantage of this. Just include `"testing/base/public/gmock.h"` and you're ready to go. ### More String Assertions (Please read the [previous](#AssertThat) section first if you haven't.) You can use the gMock [string matchers](../../googlemock/docs/cheat_sheet.md#string-matchers) with `EXPECT_THAT()` or `ASSERT_THAT()` to do more string comparison tricks (sub-string, prefix, suffix, regular expression, and etc). For example, ```c++ using ::testing::HasSubstr; using ::testing::MatchesRegex; ... ASSERT_THAT(foo_string, HasSubstr("needle")); EXPECT_THAT(bar_string, MatchesRegex("\\w*\\d+")); ``` If the string contains a well-formed HTML or XML document, you can check whether its DOM tree matches an [XPath expression](http://www.w3.org/TR/xpath/#contents): ```c++ // Currently still in //template/prototemplate/testing:xpath_matcher #include "template/prototemplate/testing/xpath_matcher.h" using prototemplate::testing::MatchesXPath; EXPECT_THAT(html_string, MatchesXPath("//a[text()='click here']")); ``` ### Windows HRESULT assertions These assertions test for `HRESULT` success or failure. Fatal assertion | Nonfatal assertion | Verifies -------------------------------------- | -------------------------------------- | -------- `ASSERT_HRESULT_SUCCEEDED(expression)` | `EXPECT_HRESULT_SUCCEEDED(expression)` | `expression` is a success `HRESULT` `ASSERT_HRESULT_FAILED(expression)` | `EXPECT_HRESULT_FAILED(expression)` | `expression` is a failure `HRESULT` The generated output contains the human-readable error message associated with the `HRESULT` code returned by `expression`. You might use them like this: ```c++ CComPtr shell; ASSERT_HRESULT_SUCCEEDED(shell.CoCreateInstance(L"Shell.Application")); CComVariant empty; ASSERT_HRESULT_SUCCEEDED(shell->ShellExecute(CComBSTR(url), empty, empty, empty, empty)); ``` ### Type Assertions You can call the function ```c++ ::testing::StaticAssertTypeEq(); ``` to assert that types `T1` and `T2` are the same. The function does nothing if the assertion is satisfied. If the types are different, the function call will fail to compile, and the compiler error message will likely (depending on the compiler) show you the actual values of `T1` and `T2`. This is mainly useful inside template code. 
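For instance, a test helper template can pin down a deduced element type at compile time (an illustrative sketch; only `::testing::StaticAssertTypeEq` comes from googletest, the helper and test names are assumptions):

```c++
#include <vector>

#include "gtest/gtest.h"

template <typename Container>
void ExpectIntElements(const Container& c) {
  // Fails to compile unless the container's element type is exactly int.
  ::testing::StaticAssertTypeEq<int, typename Container::value_type>();
  EXPECT_FALSE(c.empty());
}

TEST(TypeAssertionTest, VectorOfInt) {
  ExpectIntElements(std::vector<int>{1, 2, 3});
}
```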
**Caveat**: When used inside a member function of a class template or a function template, `StaticAssertTypeEq()` is effective only if the function is instantiated. For example, given: ```c++ template <typename T> class Foo { public: void Bar() { ::testing::StaticAssertTypeEq<int, T>(); } }; ``` the code: ```c++ void Test1() { Foo<bool> foo; } ``` will not generate a compiler error, as `Foo<bool>::Bar()` is never actually instantiated. Instead, you need: ```c++ void Test2() { Foo<bool> foo; foo.Bar(); } ``` to cause a compiler error. ### Assertion Placement You can use assertions in any C++ function. In particular, it doesn't have to be a method of the test fixture class. The one constraint is that assertions that generate a fatal failure (`FAIL*` and `ASSERT_*`) can only be used in void-returning functions. This is a consequence of Google's not using exceptions. If you place a fatal assertion in a non-void function you'll get a confusing compile error like `"error: void value not ignored as it ought to be"` or `"cannot initialize return object of type 'bool' with an rvalue of type 'void'"` or `"error: no viable conversion from 'void' to 'string'"`. If you need to use fatal assertions in a function that returns non-void, one option is to make the function return the value in an out parameter instead. For example, you can rewrite `T2 Foo(T1 x)` to `void Foo(T1 x, T2* result)`. You need to make sure that `*result` contains some sensible value even when the function returns prematurely. As the function now returns `void`, you can use any assertion inside of it. If changing the function's type is not an option, you should just use assertions that generate non-fatal failures, such as `ADD_FAILURE*` and `EXPECT_*`. NOTE: Constructors and destructors are not considered void-returning functions, according to the C++ language specification, and so you may not use fatal assertions in them; you'll get a compilation error if you try. Instead, either call `abort` and crash the entire test executable, or put the fatal assertion in a `SetUp`/`TearDown` function; see [constructor/destructor vs. `SetUp`/`TearDown`](faq.md#CtorVsSetUp). WARNING: A fatal assertion in a helper function (private void-returning method) called from a constructor or destructor does not terminate the current test, as your intuition might suggest: it merely returns from the constructor or destructor early, possibly leaving your object in a partially-constructed or partially-destructed state! You almost certainly want to `abort` or use `SetUp`/`TearDown` instead. ## Teaching googletest How to Print Your Values When a test assertion such as `EXPECT_EQ` fails, googletest prints the argument values to help you debug. It does this using a user-extensible value printer. This printer knows how to print built-in C++ types, native arrays, STL containers, and any type that supports the `<<` operator. For other types, it prints the raw bytes in the value and hopes that you, the user, can figure it out. As mentioned earlier, the printer is *extensible*. That means you can teach it to do a better job at printing your particular type than dumping the bytes. To do that, define `<<` for your type: ```c++ // Streams are allowed only for logging. Don't include this for // any other purpose. #include <ostream> namespace foo { class Bar { // We want googletest to be able to print instances of this. ... // Create a free inline friend function.
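  // Defining operator<< as an in-class friend keeps it in namespace foo, where
  // argument-dependent lookup can find it when googletest prints a Bar.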
  friend std::ostream& operator<<(std::ostream& os, const Bar& bar) {
    return os << bar.DebugString();  // whatever needed to print bar to os
  }
};

// If you can't declare the function in the class it's important that the
// << operator is defined in the SAME namespace that defines Bar.  C++'s look-up
// rules rely on that.
std::ostream& operator<<(std::ostream& os, const Bar& bar) {
  return os << bar.DebugString();  // whatever needed to print bar to os
}

}  // namespace foo
```

Sometimes, this might not be an option: your team may consider it bad style to have a `<<` operator for `Bar`, or `Bar` may already have a `<<` operator that doesn't do what you want (and you cannot change it). If so, you can instead define a `PrintTo()` function like this:

```c++
// Streams are allowed only for logging.  Don't include this for
// any other purpose.
#include <ostream>

namespace foo {

class Bar {
  ...
  friend void PrintTo(const Bar& bar, std::ostream* os) {
    *os << bar.DebugString();  // whatever needed to print bar to os
  }
};

// If you can't declare the function in the class it's important that PrintTo()
// is defined in the SAME namespace that defines Bar.  C++'s look-up rules rely
// on that.
void PrintTo(const Bar& bar, std::ostream* os) {
  *os << bar.DebugString();  // whatever needed to print bar to os
}

}  // namespace foo
```

If you have defined both `<<` and `PrintTo()`, the latter will be used by googletest. This allows you to customize how the value appears in googletest's output without affecting code that relies on the behavior of its `<<` operator.

If you want to print a value `x` using googletest's value printer yourself, just call `::testing::PrintToString(x)`, which returns an `std::string`:

```c++
vector<pair<Bar, int> > bar_ints = GetBarIntVector();

EXPECT_TRUE(IsCorrectBarIntVector(bar_ints))
    << "bar_ints = " << ::testing::PrintToString(bar_ints);
```

## Death Tests

In many applications, there are assertions that can cause application failure if a condition is not met. These sanity checks, which ensure that the program is in a known good state, are there to fail at the earliest possible time after some program state is corrupted. If the assertion checks the wrong condition, then the program may proceed in an erroneous state, which could lead to memory corruption, security holes, or worse. Hence it is vitally important to test that such assertion statements work as expected.

Since these precondition checks cause the processes to die, we call such tests _death tests_. More generally, any test that checks that a program terminates (except by throwing an exception) in an expected fashion is also a death test.

Note that if a piece of code throws an exception, we don't consider it "death" for the purpose of death tests, as the caller of the code could catch the exception and avoid the crash. If you want to verify exceptions thrown by your code, see [Exception Assertions](#ExceptionAssertions).
If you want to test `EXPECT_*()/ASSERT_*()` failures in your test code, see Catching Failures ### How to Write a Death Test googletest has the following macros to support death tests: Fatal assertion | Nonfatal assertion | Verifies ------------------------------------------------ | ------------------------------------------------ | -------- `ASSERT_DEATH(statement, matcher);` | `EXPECT_DEATH(statement, matcher);` | `statement` crashes with the given error `ASSERT_DEATH_IF_SUPPORTED(statement, matcher);` | `EXPECT_DEATH_IF_SUPPORTED(statement, matcher);` | if death tests are supported, verifies that `statement` crashes with the given error; otherwise verifies nothing `ASSERT_EXIT(statement, predicate, matcher);` | `EXPECT_EXIT(statement, predicate, matcher);` | `statement` exits with the given error, and its exit code matches `predicate` where `statement` is a statement that is expected to cause the process to die, `predicate` is a function or function object that evaluates an integer exit status, and `matcher` is either a GMock matcher matching a `const std::string&` or a (Perl) regular expression - either of which is matched against the stderr output of `statement`. For legacy reasons, a bare string (i.e. with no matcher) is interpreted as `ContainsRegex(str)`, **not** `Eq(str)`. Note that `statement` can be *any valid statement* (including *compound statement*) and doesn't have to be an expression. As usual, the `ASSERT` variants abort the current test function, while the `EXPECT` variants do not. > NOTE: We use the word "crash" here to mean that the process terminates with a > *non-zero* exit status code. There are two possibilities: either the process > has called `exit()` or `_exit()` with a non-zero value, or it may be killed by > a signal. > > This means that if `*statement*` terminates the process with a 0 exit code, it > is *not* considered a crash by `EXPECT_DEATH`. Use `EXPECT_EXIT` instead if > this is the case, or if you want to restrict the exit code more precisely. A predicate here must accept an `int` and return a `bool`. The death test succeeds only if the predicate returns `true`. googletest defines a few predicates that handle the most common cases: ```c++ ::testing::ExitedWithCode(exit_code) ``` This expression is `true` if the program exited normally with the given exit code. ```c++ ::testing::KilledBySignal(signal_number) // Not available on Windows. ``` This expression is `true` if the program was killed by the given signal. The `*_DEATH` macros are convenient wrappers for `*_EXIT` that use a predicate that verifies the process' exit code is non-zero. Note that a death test only cares about three things: 1. does `statement` abort or exit the process? 2. (in the case of `ASSERT_EXIT` and `EXPECT_EXIT`) does the exit status satisfy `predicate`? Or (in the case of `ASSERT_DEATH` and `EXPECT_DEATH`) is the exit status non-zero? And 3. does the stderr output match `regex`? In particular, if `statement` generates an `ASSERT_*` or `EXPECT_*` failure, it will **not** cause the death test to fail, as googletest assertions don't abort the process. To write a death test, simply use one of the above macros inside your test function. For example, ```c++ TEST(MyDeathTest, Foo) { // This death test uses a compound statement. 
ASSERT_DEATH({ int n = 5; Foo(&n); }, "Error on line .* of Foo()"); } TEST(MyDeathTest, NormalExit) { EXPECT_EXIT(NormalExit(), ::testing::ExitedWithCode(0), "Success"); } TEST(MyDeathTest, KillMyself) { EXPECT_EXIT(KillMyself(), ::testing::KilledBySignal(SIGKILL), "Sending myself unblockable signal"); } ``` verifies that: * calling `Foo(5)` causes the process to die with the given error message, * calling `NormalExit()` causes the process to print `"Success"` to stderr and exit with exit code 0, and * calling `KillMyself()` kills the process with signal `SIGKILL`. The test function body may contain other assertions and statements as well, if necessary. ### Death Test Naming IMPORTANT: We strongly recommend you to follow the convention of naming your **test suite** (not test) `*DeathTest` when it contains a death test, as demonstrated in the above example. The [Death Tests And Threads](#death-tests-and-threads) section below explains why. If a test fixture class is shared by normal tests and death tests, you can use `using` or `typedef` to introduce an alias for the fixture class and avoid duplicating its code: ```c++ class FooTest : public ::testing::Test { ... }; using FooDeathTest = FooTest; TEST_F(FooTest, DoesThis) { // normal test } TEST_F(FooDeathTest, DoesThat) { // death test } ``` ### Regular Expression Syntax On POSIX systems (e.g. Linux, Cygwin, and Mac), googletest uses the [POSIX extended regular expression](http://www.opengroup.org/onlinepubs/009695399/basedefs/xbd_chap09.html#tag_09_04) syntax. To learn about this syntax, you may want to read this [Wikipedia entry](http://en.wikipedia.org/wiki/Regular_expression#POSIX_Extended_Regular_Expressions). On Windows, googletest uses its own simple regular expression implementation. It lacks many features. For example, we don't support union (`"x|y"`), grouping (`"(xy)"`), brackets (`"[xy]"`), and repetition count (`"x{5,7}"`), among others. Below is what we do support (`A` denotes a literal character, period (`.`), or a single `\\ ` escape sequence; `x` and `y` denote regular expressions.): Expression | Meaning ---------- | -------------------------------------------------------------- `c` | matches any literal character `c` `\\d` | matches any decimal digit `\\D` | matches any character that's not a decimal digit `\\f` | matches `\f` `\\n` | matches `\n` `\\r` | matches `\r` `\\s` | matches any ASCII whitespace, including `\n` `\\S` | matches any character that's not a whitespace `\\t` | matches `\t` `\\v` | matches `\v` `\\w` | matches any letter, `_`, or decimal digit `\\W` | matches any character that `\\w` doesn't match `\\c` | matches any literal character `c`, which must be a punctuation `.` | matches any single character except `\n` `A?` | matches 0 or 1 occurrences of `A` `A*` | matches 0 or many occurrences of `A` `A+` | matches 1 or many occurrences of `A` `^` | matches the beginning of a string (not that of each line) `$` | matches the end of a string (not that of each line) `xy` | matches `x` followed by `y` To help you determine which capability is available on your system, googletest defines macros to govern which regular expression it is using. The macros are: `GTEST_USES_SIMPLE_RE=1` or `GTEST_USES_POSIX_RE=1`. If you want your death tests to work in all cases, you can either `#if` on these macros or use the more limited syntax only. ### How It Works Under the hood, `ASSERT_EXIT()` spawns a new process and executes the death test statement in that process. 
The details of how precisely that happens depend on the platform and the variable `::testing::GTEST_FLAG(death_test_style)` (which is initialized from the command-line flag `--gtest_death_test_style`).

* On POSIX systems, `fork()` (or `clone()` on Linux) is used to spawn the child, after which:
    * If the variable's value is `"fast"`, the death test statement is immediately executed.
    * If the variable's value is `"threadsafe"`, the child process re-executes the unit test binary just as it was originally invoked, but with some extra flags to cause just the single death test under consideration to be run.
* On Windows, the child is spawned using the `CreateProcess()` API, and re-executes the binary to cause just the single death test under consideration to be run - much like the `threadsafe` mode on POSIX.

Other values for the variable are illegal and will cause the death test to fail. Currently, the flag's default value is **"fast"**.

In either case, the parent process waits for the child process to complete and then checks that:

1. the child's exit status satisfies the predicate, and
2. the child's stderr matches the regular expression.

If the death test statement runs to completion without dying, the child process will nonetheless terminate, and the assertion fails.

### Death Tests And Threads

The reason for the two death test styles has to do with thread safety. Due to well-known problems with forking in the presence of threads, death tests should be run in a single-threaded context. Sometimes, however, it isn't feasible to arrange that kind of environment. For example, statically-initialized modules may start threads before main is ever reached. Once threads have been created, it may be difficult or impossible to clean them up.

googletest has three features intended to raise awareness of threading issues.

1. A warning is emitted if multiple threads are running when a death test is encountered.
2. Test suites with a name ending in "DeathTest" are run before all other tests.
3. It uses `clone()` instead of `fork()` to spawn the child process on Linux (`clone()` is not available on Cygwin and Mac), as `fork()` is more likely to cause the child to hang when the parent process has multiple threads.

It's perfectly fine to create threads inside a death test statement; they are executed in a separate process and cannot affect the parent.

### Death Test Styles

The "threadsafe" death test style was introduced in order to help mitigate the risks of testing in a possibly multithreaded environment. It trades increased test execution time (potentially dramatically so) for improved thread safety.

The automated testing framework does not set the style flag. You can choose a particular style of death tests by setting the flag programmatically:

```c++
testing::FLAGS_gtest_death_test_style = "threadsafe";
```

You can do this in `main()` to set the style for all death tests in the binary, or in individual tests. Recall that flags are saved before running each test and restored afterwards, so you need not do that yourself. For example:

```c++
int main(int argc, char** argv) {
  testing::InitGoogleTest(&argc, argv);
  ::testing::FLAGS_gtest_death_test_style = "fast";
  return RUN_ALL_TESTS();
}

TEST(MyDeathTest, TestOne) {
  ::testing::FLAGS_gtest_death_test_style = "threadsafe";
  // This test is run in the "threadsafe" style:
  ASSERT_DEATH(ThisShouldDie(), "");
}

TEST(MyDeathTest, TestTwo) {
  // This test is run in the "fast" style:
  ASSERT_DEATH(ThisShouldDie(), "");
}
```

### Caveats

The `statement` argument of `ASSERT_EXIT()` can be any valid C++ statement.
If it leaves the current function via a `return` statement or by throwing an exception, the death test is considered to have failed. Some googletest macros may return from the current function (e.g. `ASSERT_TRUE()`), so be sure to avoid them in `statement`. Since `statement` runs in the child process, any in-memory side effect (e.g. modifying a variable, releasing memory, etc) it causes will *not* be observable in the parent process. In particular, if you release memory in a death test, your program will fail the heap check as the parent process will never see the memory reclaimed. To solve this problem, you can 1. try not to free memory in a death test; 2. free the memory again in the parent process; or 3. do not use the heap checker in your program. Due to an implementation detail, you cannot place multiple death test assertions on the same line; otherwise, compilation will fail with an unobvious error message. Despite the improved thread safety afforded by the "threadsafe" style of death test, thread problems such as deadlock are still possible in the presence of handlers registered with `pthread_atfork(3)`. ## Using Assertions in Sub-routines ### Adding Traces to Assertions If a test sub-routine is called from several places, when an assertion inside it fails, it can be hard to tell which invocation of the sub-routine the failure is from. You can alleviate this problem using extra logging or custom failure messages, but that usually clutters up your tests. A better solution is to use the `SCOPED_TRACE` macro or the `ScopedTrace` utility: ```c++ SCOPED_TRACE(message); ScopedTrace trace("file_path", line_number, message); ``` where `message` can be anything streamable to `std::ostream`. `SCOPED_TRACE` macro will cause the current file name, line number, and the given message to be added in every failure message. `ScopedTrace` accepts explicit file name and line number in arguments, which is useful for writing test helpers. The effect will be undone when the control leaves the current lexical scope. For example, ```c++ 10: void Sub1(int n) { 11: EXPECT_EQ(Bar(n), 1); 12: EXPECT_EQ(Bar(n + 1), 2); 13: } 14: 15: TEST(FooTest, Bar) { 16: { 17: SCOPED_TRACE("A"); // This trace point will be included in 18: // every failure in this scope. 19: Sub1(1); 20: } 21: // Now it won't. 22: Sub1(9); 23: } ``` could result in messages like these: ```none path/to/foo_test.cc:11: Failure Value of: Bar(n) Expected: 1 Actual: 2 Trace: path/to/foo_test.cc:17: A path/to/foo_test.cc:12: Failure Value of: Bar(n + 1) Expected: 2 Actual: 3 ``` Without the trace, it would've been difficult to know which invocation of `Sub1()` the two failures come from respectively. (You could add an extra message to each assertion in `Sub1()` to indicate the value of `n`, but that's tedious.) Some tips on using `SCOPED_TRACE`: 1. With a suitable message, it's often enough to use `SCOPED_TRACE` at the beginning of a sub-routine, instead of at each call site. 2. When calling sub-routines inside a loop, make the loop iterator part of the message in `SCOPED_TRACE` such that you can know which iteration the failure is from. 3. Sometimes the line number of the trace point is enough for identifying the particular invocation of a sub-routine. In this case, you don't have to choose a unique message for `SCOPED_TRACE`. You can simply use `""`. 4. You can use `SCOPED_TRACE` in an inner scope when there is one in the outer scope. 
In this case, all active trace points will be included in the failure messages, in reverse order of how they are encountered.
5. The trace dump is clickable in Emacs - hit `return` on a line number and you'll be taken to that line in the source file!

### Propagating Fatal Failures

A common pitfall when using `ASSERT_*` and `FAIL*` is not understanding that when they fail they only abort the _current function_, not the entire test. For example, the following test will segfault:

```c++
void Subroutine() {
  // Generates a fatal failure and aborts the current function.
  ASSERT_EQ(1, 2);

  // The following won't be executed.
  ...
}

TEST(FooTest, Bar) {
  Subroutine();  // The intended behavior is for the fatal failure
                 // in Subroutine() to abort the entire test.
                 // The actual behavior: the function goes on after Subroutine() returns.
  int* p = NULL;
  *p = 3;  // Segfault!
}
```

To alleviate this, googletest provides three different solutions. You could use either exceptions, the `(ASSERT|EXPECT)_NO_FATAL_FAILURE` assertions, or the `HasFatalFailure()` function. They are described in the following three subsections.

#### Asserting on Subroutines with an exception

The following code can turn ASSERT-failure into an exception:

```c++
class ThrowListener : public testing::EmptyTestEventListener {
  void OnTestPartResult(const testing::TestPartResult& result) override {
    if (result.type() == testing::TestPartResult::kFatalFailure) {
      throw testing::AssertionException(result);
    }
  }
};

int main(int argc, char** argv) {
  ...
  testing::UnitTest::GetInstance()->listeners().Append(new ThrowListener);
  return RUN_ALL_TESTS();
}
```

This listener should be added after other listeners if you have any; otherwise they won't see a failed `OnTestPartResult`.

#### Asserting on Subroutines

As shown above, if your test calls a subroutine that has an `ASSERT_*` failure in it, the test will continue after the subroutine returns. This may not be what you want.

Often people want fatal failures to propagate like exceptions. For that googletest offers the following macros:

Fatal assertion                        | Nonfatal assertion                     | Verifies
-------------------------------------- | -------------------------------------- | --------
`ASSERT_NO_FATAL_FAILURE(statement);`  | `EXPECT_NO_FATAL_FAILURE(statement);`  | `statement` doesn't generate any new fatal failures in the current thread.

Only failures in the thread that executes the assertion are checked to determine the result of this type of assertion. If `statement` creates new threads, failures in these threads are ignored.

Examples:

```c++
ASSERT_NO_FATAL_FAILURE(Foo());

int i;
EXPECT_NO_FATAL_FAILURE({
  i = Bar();
});
```

Assertions from multiple threads are currently not supported on Windows.

#### Checking for Failures in the Current Test

`HasFatalFailure()` in the `::testing::Test` class returns `true` if an assertion in the current test has suffered a fatal failure. This allows functions to catch fatal failures in a sub-routine and return early.

```c++
class Test {
 public:
  ...
  static bool HasFatalFailure();
};
```

The typical usage, which basically simulates the behavior of a thrown exception, is:

```c++
TEST(FooTest, Bar) {
  Subroutine();
  // Aborts if Subroutine() had a fatal failure.
  if (HasFatalFailure()) return;

  // The following won't be executed.
  ...
} ``` If `HasFatalFailure()` is used outside of `TEST()` , `TEST_F()` , or a test fixture, you must add the `::testing::Test::` prefix, as in: ```c++ if (::testing::Test::HasFatalFailure()) return; ``` Similarly, `HasNonfatalFailure()` returns `true` if the current test has at least one non-fatal failure, and `HasFailure()` returns `true` if the current test has at least one failure of either kind. ## Logging Additional Information In your test code, you can call `RecordProperty("key", value)` to log additional information, where `value` can be either a string or an `int`. The *last* value recorded for a key will be emitted to the [XML output](#generating-an-xml-report) if you specify one. For example, the test ```c++ TEST_F(WidgetUsageTest, MinAndMaxWidgets) { RecordProperty("MaximumWidgets", ComputeMaxUsage()); RecordProperty("MinimumWidgets", ComputeMinUsage()); } ``` will output XML like this: ```xml ... ... ``` > NOTE: > > * `RecordProperty()` is a static member of the `Test` class. Therefore it > needs to be prefixed with `::testing::Test::` if used outside of the > `TEST` body and the test fixture class. > * `*key*` must be a valid XML attribute name, and cannot conflict with the > ones already used by googletest (`name`, `status`, `time`, `classname`, > `type_param`, and `value_param`). > * Calling `RecordProperty()` outside of the lifespan of a test is allowed. > If it's called outside of a test but between a test suite's > `SetUpTestSuite()` and `TearDownTestSuite()` methods, it will be > attributed to the XML element for the test suite. If it's called outside > of all test suites (e.g. in a test environment), it will be attributed to > the top-level XML element. ## Sharing Resources Between Tests in the Same Test Suite googletest creates a new test fixture object for each test in order to make tests independent and easier to debug. However, sometimes tests use resources that are expensive to set up, making the one-copy-per-test model prohibitively expensive. If the tests don't change the resource, there's no harm in their sharing a single resource copy. So, in addition to per-test set-up/tear-down, googletest also supports per-test-suite set-up/tear-down. To use it: 1. In your test fixture class (say `FooTest` ), declare as `static` some member variables to hold the shared resources. -1. Outside your test fixture class (typically just below it), define those +2. Outside your test fixture class (typically just below it), define those member variables, optionally giving them initial values. -1. In the same test fixture class, define a `static void SetUpTestSuite()` +3. In the same test fixture class, define a `static void SetUpTestSuite()` function (remember not to spell it as **`SetupTestSuite`** with a small `u`!) to set up the shared resources and a `static void TearDownTestSuite()` function to tear them down. That's it! googletest automatically calls `SetUpTestSuite()` before running the *first test* in the `FooTest` test suite (i.e. before creating the first `FooTest` object), and calls `TearDownTestSuite()` after running the *last test* in it (i.e. after deleting the last `FooTest` object). In between, the tests can use the shared resources. Remember that the test order is undefined, so your code can't depend on a test preceding or following another. Also, the tests must either not modify the state of any shared resource, or, if they do modify the state, they must restore the state to its original value before passing control to the next test. 
Here's an example of per-test-suite set-up and tear-down: ```c++ class FooTest : public ::testing::Test { protected: // Per-test-suite set-up. // Called before the first test in this test suite. // Can be omitted if not needed. static void SetUpTestSuite() { shared_resource_ = new ...; } // Per-test-suite tear-down. // Called after the last test in this test suite. // Can be omitted if not needed. static void TearDownTestSuite() { delete shared_resource_; shared_resource_ = NULL; } // You can define per-test set-up logic as usual. virtual void SetUp() { ... } // You can define per-test tear-down logic as usual. virtual void TearDown() { ... } // Some expensive resource shared by all tests. static T* shared_resource_; }; T* FooTest::shared_resource_ = NULL; TEST_F(FooTest, Test1) { ... you can refer to shared_resource_ here ... } TEST_F(FooTest, Test2) { ... you can refer to shared_resource_ here ... } ``` NOTE: Though the above code declares `SetUpTestSuite()` protected, it may sometimes be necessary to declare it public, such as when using it with `TEST_P`. ## Global Set-Up and Tear-Down Just as you can do set-up and tear-down at the test level and the test suite level, you can also do it at the test program level. Here's how. First, you subclass the `::testing::Environment` class to define a test environment, which knows how to set-up and tear-down: ```c++ class Environment { public: virtual ~Environment() {} // Override this to define how to set up the environment. virtual void SetUp() {} // Override this to define how to tear down the environment. virtual void TearDown() {} }; ``` Then, you register an instance of your environment class with googletest by calling the `::testing::AddGlobalTestEnvironment()` function: ```c++ Environment* AddGlobalTestEnvironment(Environment* env); ``` Now, when `RUN_ALL_TESTS()` is called, it first calls the `SetUp()` method of each environment object, then runs the tests if none of the environments reported fatal failures and `GTEST_SKIP()` was not called. `RUN_ALL_TESTS()` always calls `TearDown()` with each environment object, regardless of whether or not the tests were run. It's OK to register multiple environment objects. In this suite, their `SetUp()` will be called in the order they are registered, and their `TearDown()` will be called in the reverse order. Note that googletest takes ownership of the registered environment objects. Therefore **do not delete them** by yourself. You should call `AddGlobalTestEnvironment()` before `RUN_ALL_TESTS()` is called, probably in `main()`. If you use `gtest_main`, you need to call this before `main()` starts for it to take effect. One way to do this is to define a global variable like this: ```c++ ::testing::Environment* const foo_env = ::testing::AddGlobalTestEnvironment(new FooEnvironment); ``` However, we strongly recommend you to write your own `main()` and call `AddGlobalTestEnvironment()` there, as relying on initialization of global variables makes the code harder to read and may cause problems when you register multiple environments from different translation units and the environments have dependencies among them (remember that the compiler doesn't guarantee the order in which global variables from different translation units are initialized). ## Value-Parameterized Tests *Value-parameterized tests* allow you to test your code with different parameters without writing multiple copies of the same test. 
This is useful in a number of situations, for example:

* You have a piece of code whose behavior is affected by one or more command-line flags. You want to make sure your code performs correctly for various values of those flags.
* You want to test different implementations of an OO interface.
* You want to test your code over various inputs (a.k.a. data-driven testing).

This feature is easy to abuse, so please exercise your good sense when doing it!

### How to Write Value-Parameterized Tests

To write value-parameterized tests, first you should define a fixture class. It must be derived from both `testing::Test` and `testing::WithParamInterface<T>` (the latter is a pure interface), where `T` is the type of your parameter values. For convenience, you can just derive the fixture class from `testing::TestWithParam<T>`, which itself is derived from both `testing::Test` and `testing::WithParamInterface<T>`. `T` can be any copyable type. If it's a raw pointer, you are responsible for managing the lifespan of the pointed values.

NOTE: If your test fixture defines `SetUpTestSuite()` or `TearDownTestSuite()` they must be declared **public** rather than **protected** in order to use `TEST_P`.

```c++
class FooTest : public testing::TestWithParam<const char*> {
  // You can implement all the usual fixture class members here.
  // To access the test parameter, call GetParam() from class
  // TestWithParam<T>.
};

// Or, when you want to add parameters to a pre-existing fixture class:
class BaseTest : public testing::Test {
  ...
};
class BarTest : public BaseTest,
                public testing::WithParamInterface<const char*> {
  ...
};
```

Then, use the `TEST_P` macro to define as many test patterns using this fixture as you want. The `_P` suffix is for "parameterized" or "pattern", whichever you prefer to think.

```c++
TEST_P(FooTest, DoesBlah) {
  // Inside a test, access the test parameter with the GetParam() method
  // of the TestWithParam<T> class:
  EXPECT_TRUE(foo.Blah(GetParam()));
  ...
}

TEST_P(FooTest, HasBlahBlah) { ... }
```

Finally, you can use `INSTANTIATE_TEST_SUITE_P` to instantiate the test suite with any set of parameters you want. googletest defines a number of functions for generating test parameters. They return what we call (surprise!) *parameter generators*. Here is a summary of them, which are all in the `testing` namespace:

| Parameter Generator                             | Behavior                                                                                                           |
| ----------------------------------------------- | ------------------------------------------------------------------------------------------------------------------ |
| `Range(begin, end [, step])`                    | Yields values `{begin, begin+step, begin+step+step, ...}`. The values do not include `end`. `step` defaults to 1.  |
| `Values(v1, v2, ..., vN)`                       | Yields values `{v1, v2, ..., vN}`.                                                                                   |
| `ValuesIn(container)` and `ValuesIn(begin,end)` | Yields values from a C-style array, an STL-style container, or an iterator range `[begin, end)`.                    |
| `Bool()`                                        | Yields sequence `{false, true}`.                                                                                     |
| `Combine(g1, g2, ..., gN)`                      | Yields all combinations (Cartesian product) as std::tuples of the values generated by the `N` generators.           |

For more details, see the comments at the definitions of these functions.

The following statement will instantiate tests from the `FooTest` test suite each with parameter values `"meeny"`, `"miny"`, and `"moe"`.

```c++
INSTANTIATE_TEST_SUITE_P(InstantiationName,
                         FooTest,
                         testing::Values("meeny", "miny", "moe"));
```

NOTE: The code above must be placed at global or namespace scope, not at function scope.

NOTE: Don't forget this step! If you do, your test will silently pass, but none of its suites will ever run!
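Putting the pieces above together, the following is a minimal, self-contained sketch of a value-parameterized test; `IsPositive()` and the other names are hypothetical and used only for illustration:

```c++
#include "gtest/gtest.h"

// A hypothetical function under test, used only for illustration.
bool IsPositive(int n) { return n > 0; }

class IsPositiveTest : public testing::TestWithParam<int> {};

// The test body runs once for every parameter value supplied below.
TEST_P(IsPositiveTest, ReturnsTrueForPositiveValues) {
  EXPECT_TRUE(IsPositive(GetParam()));
}

// Instantiates IsPositiveTest with the values 1, 2, and 42.
INSTANTIATE_TEST_SUITE_P(PositiveValues, IsPositiveTest,
                         testing::Values(1, 2, 42));
```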
To distinguish different instances of the pattern (yes, you can instantiate it more than once), the first argument to `INSTANTIATE_TEST_SUITE_P` is a prefix that will be added to the actual test suite name. Remember to pick unique prefixes for different instantiations. The tests from the instantiation above will have these names: * `InstantiationName/FooTest.DoesBlah/0` for `"meeny"` * `InstantiationName/FooTest.DoesBlah/1` for `"miny"` * `InstantiationName/FooTest.DoesBlah/2` for `"moe"` * `InstantiationName/FooTest.HasBlahBlah/0` for `"meeny"` * `InstantiationName/FooTest.HasBlahBlah/1` for `"miny"` * `InstantiationName/FooTest.HasBlahBlah/2` for `"moe"` You can use these names in [`--gtest_filter`](#running-a-subset-of-the-tests). This statement will instantiate all tests from `FooTest` again, each with parameter values `"cat"` and `"dog"`: ```c++ const char* pets[] = {"cat", "dog"}; INSTANTIATE_TEST_SUITE_P(AnotherInstantiationName, FooTest, testing::ValuesIn(pets)); ``` The tests from the instantiation above will have these names: * `AnotherInstantiationName/FooTest.DoesBlah/0` for `"cat"` * `AnotherInstantiationName/FooTest.DoesBlah/1` for `"dog"` * `AnotherInstantiationName/FooTest.HasBlahBlah/0` for `"cat"` * `AnotherInstantiationName/FooTest.HasBlahBlah/1` for `"dog"` Please note that `INSTANTIATE_TEST_SUITE_P` will instantiate *all* tests in the given test suite, whether their definitions come before or *after* the `INSTANTIATE_TEST_SUITE_P` statement. You can see sample7_unittest.cc and sample8_unittest.cc for more examples. ### Creating Value-Parameterized Abstract Tests In the above, we define and instantiate `FooTest` in the *same* source file. Sometimes you may want to define value-parameterized tests in a library and let other people instantiate them later. This pattern is known as *abstract tests*. As an example of its application, when you are designing an interface you can write a standard suite of abstract tests (perhaps using a factory function as the test parameter) that all implementations of the interface are expected to pass. When someone implements the interface, they can instantiate your suite to get all the interface-conformance tests for free. To define abstract tests, you should organize your code like this: 1. Put the definition of the parameterized test fixture class (e.g. `FooTest`) in a header file, say `foo_param_test.h`. Think of this as *declaring* your abstract tests. -1. Put the `TEST_P` definitions in `foo_param_test.cc`, which includes +2. Put the `TEST_P` definitions in `foo_param_test.cc`, which includes `foo_param_test.h`. Think of this as *implementing* your abstract tests. Once they are defined, you can instantiate them by including `foo_param_test.h`, invoking `INSTANTIATE_TEST_SUITE_P()`, and depending on the library target that contains `foo_param_test.cc`. You can instantiate the same abstract test suite multiple times, possibly in different source files. ### Specifying Names for Value-Parameterized Test Parameters The optional last argument to `INSTANTIATE_TEST_SUITE_P()` allows the user to specify a function or functor that generates custom test name suffixes based on the test parameters. The function should accept one argument of type `testing::TestParamInfo`, and return `std::string`. `testing::PrintToStringParamName` is a builtin test suffix generator that returns the value of `testing::PrintToString(GetParam())`. It does not work for `std::string` or C strings. 
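For string parameters, a small custom name generator is therefore typically supplied instead. The sketch below (with a hypothetical suite and hypothetical parameter values) simply returns the parameter itself, which is already a valid name suffix:

```c++
#include <string>

#include "gtest/gtest.h"

// A hypothetical string-parameterized suite, used only for illustration.
class PetTest : public testing::TestWithParam<std::string> {};

TEST_P(PetTest, NameIsNotEmpty) { EXPECT_FALSE(GetParam().empty()); }

// The parameter values here are already valid test-name suffixes
// (non-empty, alphanumeric), so the generator can return them directly.
INSTANTIATE_TEST_SUITE_P(
    Pets, PetTest, testing::Values("cat", "dog"),
    [](const testing::TestParamInfo<PetTest::ParamType>& info) {
      return info.param;
    });
```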
NOTE: test names must be non-empty, unique, and may only contain ASCII alphanumeric characters. In particular, they [should not contain underscores](https://github.com/google/googletest/blob/master/googletest/docs/faq.md#why-should-test-suite-names-and-test-names-not-contain-underscore).

```c++
class MyTestSuite : public testing::TestWithParam<int> {};

TEST_P(MyTestSuite, MyTest) {
  std::cout << "Example Test Param: " << GetParam() << std::endl;
}

INSTANTIATE_TEST_SUITE_P(MyGroup, MyTestSuite, testing::Range(0, 10),
                         testing::PrintToStringParamName());
```

Providing a custom functor allows for more control over test parameter name generation, especially for types where the automatic conversion does not generate helpful parameter names (e.g. strings as demonstrated above). The following example illustrates this for multiple parameters, an enumeration type and a string, and also demonstrates how to combine generators. It uses a lambda for conciseness:

```c++
enum class MyType { MY_FOO = 0, MY_BAR = 1 };

class MyTestSuite : public testing::TestWithParam<std::tuple<MyType, std::string>> {
};

INSTANTIATE_TEST_SUITE_P(
    MyGroup, MyTestSuite,
    testing::Combine(
        testing::Values(MyType::MY_FOO, MyType::MY_BAR),
        testing::Values("A", "B")),
    [](const testing::TestParamInfo<MyTestSuite::ParamType>& info) {
      std::string name = absl::StrCat(
          std::get<0>(info.param) == MyType::MY_FOO ? "Foo" : "Bar", "_",
          std::get<1>(info.param));
      absl::c_replace_if(name, [](char c) { return !std::isalnum(c); }, '_');
      return name;
    });
```

## Typed Tests

Suppose you have multiple implementations of the same interface and want to make sure that all of them satisfy some common requirements. Or, you may have defined several types that are supposed to conform to the same "concept" and you want to verify it. In both cases, you want the same test logic repeated for different types.

While you can write one `TEST` or `TEST_F` for each type you want to test (and you may even factor the test logic into a function template that you invoke from the `TEST`), it's tedious and doesn't scale: if you want `m` tests over `n` types, you'll end up writing `m*n` `TEST`s.

*Typed tests* allow you to repeat the same test logic over a list of types. You only need to write the test logic once, although you must know the type list when writing typed tests. Here's how you do it:

First, define a fixture class template. It should be parameterized by a type. Remember to derive it from `::testing::Test`:

```c++
template <typename T>
class FooTest : public ::testing::Test {
 public:
  ...
  typedef std::list<T> List;
  static T shared_;
  T value_;
};
```

Next, associate a list of types with the test suite, which will be repeated for each type in the list:

```c++
using MyTypes = ::testing::Types<char, int, unsigned int>;
TYPED_TEST_SUITE(FooTest, MyTypes);
```

The type alias (`using` or `typedef`) is necessary for the `TYPED_TEST_SUITE` macro to parse correctly. Otherwise the compiler will think that each comma in the type list introduces a new macro argument.

Then, use `TYPED_TEST()` instead of `TEST_F()` to define a typed test for this test suite. You can repeat this as many times as you want:

```c++
TYPED_TEST(FooTest, DoesBlah) {
  // Inside a test, refer to the special name TypeParam to get the type
  // parameter.  Since we are inside a derived class template, C++ requires
  // us to visit the members of FooTest via 'this'.
  TypeParam n = this->value_;

  // To visit static members of the fixture, add the 'TestFixture::'
  // prefix.
  n += TestFixture::shared_;

  // To refer to typedefs in the fixture, add the 'typename TestFixture::'
  // prefix.
The 'typename' is required to satisfy the compiler. typename TestFixture::List values; values.push_back(n); ... } TYPED_TEST(FooTest, HasPropertyA) { ... } ``` You can see sample6_unittest.cc ## Type-Parameterized Tests *Type-parameterized tests* are like typed tests, except that they don't require you to know the list of types ahead of time. Instead, you can define the test logic first and instantiate it with different type lists later. You can even instantiate it more than once in the same program. If you are designing an interface or concept, you can define a suite of type-parameterized tests to verify properties that any valid implementation of the interface/concept should have. Then, the author of each implementation can just instantiate the test suite with their type to verify that it conforms to the requirements, without having to write similar tests repeatedly. Here's an example: First, define a fixture class template, as we did with typed tests: ```c++ template class FooTest : public ::testing::Test { ... }; ``` Next, declare that you will define a type-parameterized test suite: ```c++ TYPED_TEST_SUITE_P(FooTest); ``` Then, use `TYPED_TEST_P()` to define a type-parameterized test. You can repeat this as many times as you want: ```c++ TYPED_TEST_P(FooTest, DoesBlah) { // Inside a test, refer to TypeParam to get the type parameter. TypeParam n = 0; ... } TYPED_TEST_P(FooTest, HasPropertyA) { ... } ``` Now the tricky part: you need to register all test patterns using the `REGISTER_TYPED_TEST_SUITE_P` macro before you can instantiate them. The first argument of the macro is the test suite name; the rest are the names of the tests in this test suite: ```c++ REGISTER_TYPED_TEST_SUITE_P(FooTest, DoesBlah, HasPropertyA); ``` Finally, you are free to instantiate the pattern with the types you want. If you put the above code in a header file, you can `#include` it in multiple C++ source files and instantiate it multiple times. ```c++ typedef ::testing::Types MyTypes; INSTANTIATE_TYPED_TEST_SUITE_P(My, FooTest, MyTypes); ``` To distinguish different instances of the pattern, the first argument to the `INSTANTIATE_TYPED_TEST_SUITE_P` macro is a prefix that will be added to the actual test suite name. Remember to pick unique prefixes for different instances. In the special case where the type list contains only one type, you can write that type directly without `::testing::Types<...>`, like this: ```c++ INSTANTIATE_TYPED_TEST_SUITE_P(My, FooTest, int); ``` You can see `sample6_unittest.cc` for a complete example. ## Testing Private Code If you change your software's internal implementation, your tests should not break as long as the change is not observable by users. Therefore, **per the black-box testing principle, most of the time you should test your code through its public interfaces.** **If you still find yourself needing to test internal implementation code, consider if there's a better design.** The desire to test internal implementation is often a sign that the class is doing too much. Consider extracting an implementation class, and testing it. Then use that implementation class in the original class. If you absolutely have to test non-public interface code though, you can. There are two cases to consider: * Static functions ( *not* the same as static member functions!) 
or unnamed namespaces, and * Private or protected class members To test them, we use the following special techniques: * Both static functions and definitions/declarations in an unnamed namespace are only visible within the same translation unit. To test them, you can `#include` the entire `.cc` file being tested in your `*_test.cc` file. (#including `.cc` files is not a good way to reuse code - you should not do this in production code!) However, a better approach is to move the private code into the `foo::internal` namespace, where `foo` is the namespace your project normally uses, and put the private declarations in a `*-internal.h` file. Your production `.cc` files and your tests are allowed to include this internal header, but your clients are not. This way, you can fully test your internal implementation without leaking it to your clients. * Private class members are only accessible from within the class or by friends. To access a class' private members, you can declare your test fixture as a friend to the class and define accessors in your fixture. Tests using the fixture can then access the private members of your production class via the accessors in the fixture. Note that even though your fixture is a friend to your production class, your tests are not automatically friends to it, as they are technically defined in sub-classes of the fixture. Another way to test private members is to refactor them into an implementation class, which is then declared in a `*-internal.h` file. Your clients aren't allowed to include this header but your tests can. Such is called the [Pimpl](https://www.gamedev.net/articles/programming/general-and-gameplay-programming/the-c-pimpl-r1794/) (Private Implementation) idiom. Or, you can declare an individual test as a friend of your class by adding this line in the class body: ```c++ FRIEND_TEST(TestSuiteName, TestName); ``` For example, ```c++ // foo.h class Foo { ... private: FRIEND_TEST(FooTest, BarReturnsZeroOnNull); int Bar(void* x); }; // foo_test.cc ... TEST(FooTest, BarReturnsZeroOnNull) { Foo foo; EXPECT_EQ(foo.Bar(NULL), 0); // Uses Foo's private member Bar(). } ``` Pay special attention when your class is defined in a namespace, as you should define your test fixtures and tests in the same namespace if you want them to be friends of your class. For example, if the code to be tested looks like: ```c++ namespace my_namespace { class Foo { friend class FooTest; FRIEND_TEST(FooTest, Bar); FRIEND_TEST(FooTest, Baz); ... definition of the class Foo ... }; } // namespace my_namespace ``` Your test code should be something like: ```c++ namespace my_namespace { class FooTest : public ::testing::Test { protected: ... }; TEST_F(FooTest, Bar) { ... } TEST_F(FooTest, Baz) { ... } } // namespace my_namespace ``` ## "Catching" Failures If you are building a testing utility on top of googletest, you'll want to test your utility. What framework would you use to test it? googletest, of course. The challenge is to verify that your testing utility reports failures correctly. In frameworks that report a failure by throwing an exception, you could catch the exception and assert on it. But googletest doesn't use exceptions, so how do we test that a piece of code generates an expected failure? gunit-spi.h contains some constructs to do this. After #including this header, you can use ```c++ EXPECT_FATAL_FAILURE(statement, substring); ``` to assert that `statement` generates a fatal (e.g. 
`ASSERT_*`) failure in the current thread whose message contains the given `substring`, or use

```c++
  EXPECT_NONFATAL_FAILURE(statement, substring);
```

if you are expecting a non-fatal (e.g. `EXPECT_*`) failure.

Only failures in the current thread are checked to determine the result of this type of expectation. If `statement` creates new threads, failures in these threads are also ignored. If you want to catch failures in other threads as well, use one of the following macros instead:

```c++
  EXPECT_FATAL_FAILURE_ON_ALL_THREADS(statement, substring);
  EXPECT_NONFATAL_FAILURE_ON_ALL_THREADS(statement, substring);
```

NOTE: Assertions from multiple threads are currently not supported on Windows.

For technical reasons, there are some caveats:

1. You cannot stream a failure message to either macro.
2. `statement` in `EXPECT_FATAL_FAILURE{_ON_ALL_THREADS}()` cannot reference local non-static variables or non-static members of `this` object.
3. `statement` in `EXPECT_FATAL_FAILURE{_ON_ALL_THREADS}()` cannot return a value.

## Registering tests programmatically

The `TEST` macros handle the vast majority of all use cases, but there are a few where runtime registration logic is required. For those cases, the framework provides `::testing::RegisterTest`, which allows callers to register arbitrary tests dynamically.

This is an advanced API only to be used when the `TEST` macros are insufficient. The macros should be preferred when possible, as they avoid most of the complexity of calling this function.

It provides the following signature:

```c++
template <int&... ExplicitParameterBarrier, typename Factory>
TestInfo* RegisterTest(const char* test_suite_name, const char* test_name,
                       const char* type_param, const char* value_param,
                       const char* file, int line, Factory factory);
```

The `factory` argument is a factory callable (move-constructible) object or function pointer that creates a new instance of the Test object. It hands ownership of that instance to the caller. The signature of the callable is `Fixture*()`, where `Fixture` is the test fixture class for the test. All tests registered with the same `test_suite_name` must return the same fixture type. This is checked at runtime.

The framework will infer the fixture class from the factory and will call the `SetUpTestSuite` and `TearDownTestSuite` methods for it.

The function must be called before `RUN_ALL_TESTS()` is invoked; otherwise the behavior is undefined.

Use case example:

```c++
class MyFixture : public ::testing::Test {
 public:
  // All of these optional, just like in regular macro usage.
  static void SetUpTestSuite() { ... }
  static void TearDownTestSuite() { ... }
  void SetUp() override { ... }
  void TearDown() override { ... }
};

class MyTest : public MyFixture {
 public:
  explicit MyTest(int data) : data_(data) {}
  void TestBody() override { ... }

 private:
  int data_;
};

void RegisterMyTests(const std::vector<int>& values) {
  for (int v : values) {
    ::testing::RegisterTest(
        "MyFixture", ("Test" + std::to_string(v)).c_str(), nullptr,
        std::to_string(v).c_str(), __FILE__, __LINE__,
        // Important to use the fixture type as the return type here.
        [=]() -> MyFixture* { return new MyTest(v); });
  }
}
...
int main(int argc, char** argv) {
  std::vector<int> values_to_test = LoadValuesFromConfig();
  RegisterMyTests(values_to_test);
  ...
  return RUN_ALL_TESTS();
}
```

## Getting the Current Test's Name

Sometimes a function may need to know the name of the currently running test.
For example, you may be using the `SetUp()` method of your test fixture to set the golden file name based on which test is running. The `::testing::TestInfo` class has this information: ```c++ namespace testing { class TestInfo { public: // Returns the test suite name and the test name, respectively. // // Do NOT delete or free the return value - it's managed by the // TestInfo class. const char* test_suite_name() const; const char* name() const; }; } ``` To obtain a `TestInfo` object for the currently running test, call `current_test_info()` on the `UnitTest` singleton object: ```c++ // Gets information about the currently running test. // Do NOT delete the returned object - it's managed by the UnitTest class. const ::testing::TestInfo* const test_info = ::testing::UnitTest::GetInstance()->current_test_info(); printf("We are in test %s of test suite %s.\n", test_info->name(), test_info->test_suite_name()); ``` `current_test_info()` returns a null pointer if no test is running. In particular, you cannot find the test suite name in `TestSuiteSetUp()`, `TestSuiteTearDown()` (where you know the test suite name implicitly), or functions called from them. ## Extending googletest by Handling Test Events googletest provides an **event listener API** to let you receive notifications about the progress of a test program and test failures. The events you can listen to include the start and end of the test program, a test suite, or a test method, among others. You may use this API to augment or replace the standard console output, replace the XML output, or provide a completely different form of output, such as a GUI or a database. You can also use test events as checkpoints to implement a resource leak checker, for example. ### Defining Event Listeners To define a event listener, you subclass either testing::TestEventListener or testing::EmptyTestEventListener The former is an (abstract) interface, where *each pure virtual method can be overridden to handle a test event* (For example, when a test starts, the `OnTestStart()` method will be called.). The latter provides an empty implementation of all methods in the interface, such that a subclass only needs to override the methods it cares about. When an event is fired, its context is passed to the handler function as an argument. The following argument types are used: * UnitTest reflects the state of the entire test program, * TestSuite has information about a test suite, which can contain one or more tests, * TestInfo contains the state of a test, and * TestPartResult represents the result of a test assertion. An event handler function can examine the argument it receives to find out interesting information about the event and the test program's state. Here's an example: ```c++ class MinimalistPrinter : public ::testing::EmptyTestEventListener { // Called before a test starts. virtual void OnTestStart(const ::testing::TestInfo& test_info) { printf("*** Test %s.%s starting.\n", test_info.test_suite_name(), test_info.name()); } // Called after a failed assertion or a SUCCESS(). virtual void OnTestPartResult(const ::testing::TestPartResult& test_part_result) { printf("%s in %s:%d\n%s\n", test_part_result.failed() ? "*** Failure" : "Success", test_part_result.file_name(), test_part_result.line_number(), test_part_result.summary()); } // Called after a test ends. 
virtual void OnTestEnd(const ::testing::TestInfo& test_info) { printf("*** Test %s.%s ending.\n", test_info.test_suite_name(), test_info.name()); } }; ``` ### Using Event Listeners To use the event listener you have defined, add an instance of it to the googletest event listener list (represented by class TestEventListeners - note the "s" at the end of the name) in your `main()` function, before calling `RUN_ALL_TESTS()`: ```c++ int main(int argc, char** argv) { ::testing::InitGoogleTest(&argc, argv); // Gets hold of the event listener list. ::testing::TestEventListeners& listeners = ::testing::UnitTest::GetInstance()->listeners(); // Adds a listener to the end. googletest takes the ownership. listeners.Append(new MinimalistPrinter); return RUN_ALL_TESTS(); } ``` There's only one problem: the default test result printer is still in effect, so its output will mingle with the output from your minimalist printer. To suppress the default printer, just release it from the event listener list and delete it. You can do so by adding one line: ```c++ ... delete listeners.Release(listeners.default_result_printer()); listeners.Append(new MinimalistPrinter); return RUN_ALL_TESTS(); ``` Now, sit back and enjoy a completely different output from your tests. For more details, you can read this sample9_unittest.cc You may append more than one listener to the list. When an `On*Start()` or `OnTestPartResult()` event is fired, the listeners will receive it in the order they appear in the list (since new listeners are added to the end of the list, the default text printer and the default XML generator will receive the event first). An `On*End()` event will be received by the listeners in the *reverse* order. This allows output by listeners added later to be framed by output from listeners added earlier. ### Generating Failures in Listeners You may use failure-raising macros (`EXPECT_*()`, `ASSERT_*()`, `FAIL()`, etc) when processing an event. There are some restrictions: 1. You cannot generate any failure in `OnTestPartResult()` (otherwise it will cause `OnTestPartResult()` to be called recursively). -1. A listener that handles `OnTestPartResult()` is not allowed to generate any +2. A listener that handles `OnTestPartResult()` is not allowed to generate any failure. When you add listeners to the listener list, you should put listeners that handle `OnTestPartResult()` *before* listeners that can generate failures. This ensures that failures generated by the latter are attributed to the right test by the former. We have a sample of failure-raising listener sample10_unittest.cc ## Running Test Programs: Advanced Options googletest test programs are ordinary executables. Once built, you can run them directly and affect their behavior via the following environment variables and/or command line flags. For the flags to work, your programs must call `::testing::InitGoogleTest()` before calling `RUN_ALL_TESTS()`. To see a list of supported flags and their usage, please run your test program with the `--help` flag. You can also use `-h`, `-?`, or `/?` for short. If an option is specified both by an environment variable and by a flag, the latter takes precedence. ### Selecting Tests #### Listing Test Names Sometimes it is necessary to list the available tests in a program before running them so that a filter may be applied if needed. Including the flag `--gtest_list_tests` overrides all other flags and lists tests in the following format: ```none TestSuite1. TestName1 TestName2 TestSuite2. 
TestName ``` None of the tests listed are actually run if the flag is provided. There is no corresponding environment variable for this flag. #### Running a Subset of the Tests By default, a googletest program runs all tests the user has defined. Sometimes, you want to run only a subset of the tests (e.g. for debugging or quickly verifying a change). If you set the `GTEST_FILTER` environment variable or the `--gtest_filter` flag to a filter string, googletest will only run the tests whose full names (in the form of `TestSuiteName.TestName`) match the filter. The format of a filter is a '`:`'-separated list of wildcard patterns (called the *positive patterns*) optionally followed by a '`-`' and another '`:`'-separated pattern list (called the *negative patterns*). A test matches the filter if and only if it matches any of the positive patterns but does not match any of the negative patterns. A pattern may contain `'*'` (matches any string) or `'?'` (matches any single character). For convenience, the filter `'*-NegativePatterns'` can be also written as `'-NegativePatterns'`. For example: * `./foo_test` Has no flag, and thus runs all its tests. * `./foo_test --gtest_filter=*` Also runs everything, due to the single match-everything `*` value. * `./foo_test --gtest_filter=FooTest.*` Runs everything in test suite `FooTest` . * `./foo_test --gtest_filter=*Null*:*Constructor*` Runs any test whose full name contains either `"Null"` or `"Constructor"` . * `./foo_test --gtest_filter=-*DeathTest.*` Runs all non-death tests. * `./foo_test --gtest_filter=FooTest.*-FooTest.Bar` Runs everything in test suite `FooTest` except `FooTest.Bar`. * `./foo_test --gtest_filter=FooTest.*:BarTest.*-FooTest.Bar:BarTest.Foo` Runs everything in test suite `FooTest` except `FooTest.Bar` and everything in test suite `BarTest` except `BarTest.Foo`. #### Temporarily Disabling Tests If you have a broken test that you cannot fix right away, you can add the `DISABLED_` prefix to its name. This will exclude it from execution. This is better than commenting out the code or using `#if 0`, as disabled tests are still compiled (and thus won't rot). If you need to disable all tests in a test suite, you can either add `DISABLED_` to the front of the name of each test, or alternatively add it to the front of the test suite name. For example, the following tests won't be run by googletest, even though they will still be compiled: ```c++ // Tests that Foo does Abc. TEST(FooTest, DISABLED_DoesAbc) { ... } class DISABLED_BarTest : public ::testing::Test { ... }; // Tests that Bar does Xyz. TEST_F(DISABLED_BarTest, DoesXyz) { ... } ``` NOTE: This feature should only be used for temporary pain-relief. You still have to fix the disabled tests at a later date. As a reminder, googletest will print a banner warning you if a test program contains any disabled tests. TIP: You can easily count the number of disabled tests you have using `gsearch` and/or `grep`. This number can be used as a metric for improving your test quality. #### Temporarily Enabling Disabled Tests To include disabled tests in test execution, just invoke the test program with the `--gtest_also_run_disabled_tests` flag or set the `GTEST_ALSO_RUN_DISABLED_TESTS` environment variable to a value other than `0`. You can combine this with the `--gtest_filter` flag to further select which disabled tests to run. ### Repeating the Tests Once in a while you'll run into a test whose result is hit-or-miss. 
Perhaps it will fail only 1% of the time, making it rather hard to reproduce the bug under a debugger. This can be a major source of frustration. The `--gtest_repeat` flag allows you to repeat all (or selected) test methods in a program many times. Hopefully, a flaky test will eventually fail and give you a chance to debug. Here's how to use it: ```none $ foo_test --gtest_repeat=1000 Repeat foo_test 1000 times and don't stop at failures. $ foo_test --gtest_repeat=-1 A negative count means repeating forever. $ foo_test --gtest_repeat=1000 --gtest_break_on_failure Repeat foo_test 1000 times, stopping at the first failure. This is especially useful when running under a debugger: when the test fails, it will drop into the debugger and you can then inspect variables and stacks. $ foo_test --gtest_repeat=1000 --gtest_filter=FooBar.* Repeat the tests whose name matches the filter 1000 times. ``` If your test program contains [global set-up/tear-down](#global-set-up-and-tear-down) code, it will be repeated in each iteration as well, as the flakiness may be in it. You can also specify the repeat count by setting the `GTEST_REPEAT` environment variable. ### Shuffling the Tests You can specify the `--gtest_shuffle` flag (or set the `GTEST_SHUFFLE` environment variable to `1`) to run the tests in a program in a random order. This helps to reveal bad dependencies between tests. By default, googletest uses a random seed calculated from the current time. Therefore you'll get a different order every time. The console output includes the random seed value, such that you can reproduce an order-related test failure later. To specify the random seed explicitly, use the `--gtest_random_seed=SEED` flag (or set the `GTEST_RANDOM_SEED` environment variable), where `SEED` is an integer in the range [0, 99999]. The seed value 0 is special: it tells googletest to do the default behavior of calculating the seed from the current time. If you combine this with `--gtest_repeat=N`, googletest will pick a different random seed and re-shuffle the tests in each iteration. ### Controlling Test Output #### Colored Terminal Output googletest can use colors in its terminal output to make it easier to spot the important information: ...
[----------] 1 test from FooTest
[ RUN      ] FooTest.DoesAbc
[       OK ] FooTest.DoesAbc
[----------] 2 tests from BarTest
[ RUN      ] BarTest.HasXyzProperty
[       OK ] BarTest.HasXyzProperty
[ RUN      ] BarTest.ReturnsTrueOnSuccess
... some error messages ...
[   FAILED ] BarTest.ReturnsTrueOnSuccess
...
[==========] 30 tests from 14 test suites ran.
[   PASSED ] 28 tests.
[   FAILED ] 2 tests, listed below:
[   FAILED ] BarTest.ReturnsTrueOnSuccess
[   FAILED ] AnotherTest.DoesXyz

2 FAILED TESTS
You can set the `GTEST_COLOR` environment variable or the `--gtest_color` command line flag to `yes`, `no`, or `auto` (the default) to enable colors, disable colors, or let googletest decide. When the value is `auto`, googletest will use colors if and only if the output goes to a terminal and (on non-Windows platforms) the `TERM` environment variable is set to `xterm` or `xterm-color`.

#### Suppressing the Elapsed Time

By default, googletest prints the time it takes to run each test. To disable that, run the test program with the `--gtest_print_time=0` command line flag, or set the `GTEST_PRINT_TIME` environment variable to `0`.

#### Suppressing UTF-8 Text Output

In case of assertion failures, googletest prints expected and actual values of type `string` both as hex-encoded strings as well as in readable UTF-8 text if they contain valid non-ASCII UTF-8 characters. If you want to suppress the UTF-8 text because, for example, you don't have a UTF-8 compatible output medium, run the test program with `--gtest_print_utf8=0` or set the `GTEST_PRINT_UTF8` environment variable to `0`.

#### Generating an XML Report

googletest can emit a detailed XML report to a file in addition to its normal textual output. The report contains the duration of each test, and thus can help you identify slow tests. The report is also used by the http://unittest dashboard to show per-test-method error messages.

To generate the XML report, set the `GTEST_OUTPUT` environment variable or the `--gtest_output` flag to the string `"xml:path_to_output_file"`, which will create the file at the given location. You can also just use the string `"xml"`, in which case the output can be found in the `test_detail.xml` file in the current directory.

If you specify a directory (for example, `"xml:output/directory/"` on Linux or `"xml:output\directory\"` on Windows), googletest will create the XML file in that directory, named after the test executable (e.g. `foo_test.xml` for test program `foo_test` or `foo_test.exe`). If the file already exists (perhaps left over from a previous run), googletest will pick a different name (e.g. `foo_test_1.xml`) to avoid overwriting it.

The report is based on the `junitreport` Ant task. Since that format was originally intended for Java, a little interpretation is required to make it apply to googletest tests, as shown here:

```xml
<testsuites name="AllTests" ...>
  <testsuite name="test_suite_name" ...>
    <testcase name="test_name" ...>
      <failure message="..."/>
      <failure message="..."/>
      <failure message="..."/>
    </testcase>
  </testsuite>
</testsuites>
```

* The root `<testsuites>` element corresponds to the entire test program.
* `<testsuite>` elements correspond to googletest test suites.
* `<testcase>` elements correspond to googletest test functions.

For instance, the following program

```c++
TEST(MathTest, Addition) { ... }
TEST(MathTest, Subtraction) { ... }
TEST(LogicTest, NonContradiction) { ... }
```

could generate this report:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<testsuites tests="3" failures="1" errors="0" time="0.035" timestamp="2011-10-31T18:52:42" name="AllTests">
  <testsuite name="MathTest" tests="2" failures="1" errors="0" time="0.015">
    <testcase name="Addition" status="run" time="0.007" classname="">
      <failure message="Value of: add(1, 1)&#x0A;  Actual: 3&#x0A;Expected: 2" type=""/>
      <failure message="Value of: add(1, -1)&#x0A;  Actual: 1&#x0A;Expected: 0" type=""/>
    </testcase>
    <testcase name="Subtraction" status="run" time="0.005" classname="" />
  </testsuite>
  <testsuite name="LogicTest" tests="1" failures="0" errors="0" time="0.005">
    <testcase name="NonContradiction" status="run" time="0.005" classname="" />
  </testsuite>
</testsuites>
```

Things to note:

* The `tests` attribute of a `<testsuites>` or `<testsuite>` element tells how many test functions the googletest program or test suite contains, while the `failures` attribute tells how many of them failed.
* The `time` attribute expresses the duration of the test, test suite, or entire test program in seconds.
* The `timestamp` attribute records the local date and time of the test execution.
* Each `<failure>` element corresponds to a single failed googletest assertion.

#### Generating a JSON Report

googletest can also emit a JSON report as an alternative format to XML. To generate the JSON report, set the `GTEST_OUTPUT` environment variable or the `--gtest_output` flag to the string `"json:path_to_output_file"`, which will create the file at the given location.
You can also just use the string `"json"`, in which case the output can be found in the `test_detail.json` file in the current directory. The report format conforms to the following JSON Schema: ```json { "$schema": "http://json-schema.org/schema#", "type": "object", "definitions": { "TestCase": { "type": "object", "properties": { "name": { "type": "string" }, "tests": { "type": "integer" }, "failures": { "type": "integer" }, "disabled": { "type": "integer" }, "time": { "type": "string" }, "testsuite": { "type": "array", "items": { "$ref": "#/definitions/TestInfo" } } } }, "TestInfo": { "type": "object", "properties": { "name": { "type": "string" }, "status": { "type": "string", "enum": ["RUN", "NOTRUN"] }, "time": { "type": "string" }, "classname": { "type": "string" }, "failures": { "type": "array", "items": { "$ref": "#/definitions/Failure" } } } }, "Failure": { "type": "object", "properties": { "failures": { "type": "string" }, "type": { "type": "string" } } } }, "properties": { "tests": { "type": "integer" }, "failures": { "type": "integer" }, "disabled": { "type": "integer" }, "errors": { "type": "integer" }, "timestamp": { "type": "string", "format": "date-time" }, "time": { "type": "string" }, "name": { "type": "string" }, "testsuites": { "type": "array", "items": { "$ref": "#/definitions/TestCase" } } } } ``` The report uses the format that conforms to the following Proto3 using the [JSON encoding](https://developers.google.com/protocol-buffers/docs/proto3#json): ```proto syntax = "proto3"; package googletest; import "google/protobuf/timestamp.proto"; import "google/protobuf/duration.proto"; message UnitTest { int32 tests = 1; int32 failures = 2; int32 disabled = 3; int32 errors = 4; google.protobuf.Timestamp timestamp = 5; google.protobuf.Duration time = 6; string name = 7; repeated TestCase testsuites = 8; } message TestCase { string name = 1; int32 tests = 2; int32 failures = 3; int32 disabled = 4; int32 errors = 5; google.protobuf.Duration time = 6; repeated TestInfo testsuite = 7; } message TestInfo { string name = 1; enum Status { RUN = 0; NOTRUN = 1; } Status status = 2; google.protobuf.Duration time = 3; string classname = 4; message Failure { string failures = 1; string type = 2; } repeated Failure failures = 5; } ``` For instance, the following program ```c++ TEST(MathTest, Addition) { ... } TEST(MathTest, Subtraction) { ... } TEST(LogicTest, NonContradiction) { ... } ``` could generate this report: ```json { "tests": 3, "failures": 1, "errors": 0, "time": "0.035s", "timestamp": "2011-10-31T18:52:42Z", "name": "AllTests", "testsuites": [ { "name": "MathTest", "tests": 2, "failures": 1, "errors": 0, "time": "0.015s", "testsuite": [ { "name": "Addition", "status": "RUN", "time": "0.007s", "classname": "", "failures": [ { "message": "Value of: add(1, 1)\n Actual: 3\nExpected: 2", "type": "" }, { "message": "Value of: add(1, -1)\n Actual: 1\nExpected: 0", "type": "" } ] }, { "name": "Subtraction", "status": "RUN", "time": "0.005s", "classname": "" } ] }, { "name": "LogicTest", "tests": 1, "failures": 0, "errors": 0, "time": "0.005s", "testsuite": [ { "name": "NonContradiction", "status": "RUN", "time": "0.005s", "classname": "" } ] } ] } ``` IMPORTANT: The exact format of the JSON document is subject to change. ### Controlling How Failures Are Reported #### Turning Assertion Failures into Break-Points When running test programs under a debugger, it's very convenient if the debugger can catch an assertion failure and automatically drop into interactive mode. 
googletest's *break-on-failure* mode supports this behavior. To enable it, set the `GTEST_BREAK_ON_FAILURE` environment variable to a value other than `0`. Alternatively, you can use the `--gtest_break_on_failure` command line flag. #### Disabling Catching Test-Thrown Exceptions googletest can be used either with or without exceptions enabled. If a test throws a C++ exception or (on Windows) a structured exception (SEH), by default googletest catches it, reports it as a test failure, and continues with the next test method. This maximizes the coverage of a test run. Also, on Windows an uncaught exception will cause a pop-up window, so catching the exceptions allows you to run the tests automatically. When debugging the test failures, however, you may instead want the exceptions to be handled by the debugger, such that you can examine the call stack when an exception is thrown. To achieve that, set the `GTEST_CATCH_EXCEPTIONS` environment variable to `0`, or use the `--gtest_catch_exceptions=0` flag when running the tests. diff --git a/googletest/docs/faq.md b/googletest/docs/faq.md index 0e9cfeeb..d6e7f54a 100644 --- a/googletest/docs/faq.md +++ b/googletest/docs/faq.md @@ -1,753 +1,753 @@ # Googletest FAQ ## Why should test suite names and test names not contain underscore? Underscore (`_`) is special, as C++ reserves the following to be used by the compiler and the standard library: 1. any identifier that starts with an `_` followed by an upper-case letter, and -1. any identifier that contains two consecutive underscores (i.e. `__`) +2. any identifier that contains two consecutive underscores (i.e. `__`) *anywhere* in its name. User code is *prohibited* from using such identifiers. Now let's look at what this means for `TEST` and `TEST_F`. Currently `TEST(TestSuiteName, TestName)` generates a class named `TestSuiteName_TestName_Test`. What happens if `TestSuiteName` or `TestName` contains `_`? 1. If `TestSuiteName` starts with an `_` followed by an upper-case letter (say, `_Foo`), we end up with `_Foo_TestName_Test`, which is reserved and thus invalid. -1. If `TestSuiteName` ends with an `_` (say, `Foo_`), we get +2. If `TestSuiteName` ends with an `_` (say, `Foo_`), we get `Foo__TestName_Test`, which is invalid. -1. If `TestName` starts with an `_` (say, `_Bar`), we get +3. If `TestName` starts with an `_` (say, `_Bar`), we get `TestSuiteName__Bar_Test`, which is invalid. -1. If `TestName` ends with an `_` (say, `Bar_`), we get +4. If `TestName` ends with an `_` (say, `Bar_`), we get `TestSuiteName_Bar__Test`, which is invalid. So clearly `TestSuiteName` and `TestName` cannot start or end with `_` (Actually, `TestSuiteName` can start with `_` -- as long as the `_` isn't followed by an upper-case letter. But that's getting complicated. So for simplicity we just say that it cannot start with `_`.). It may seem fine for `TestSuiteName` and `TestName` to contain `_` in the middle. However, consider this: ```c++ TEST(Time, Flies_Like_An_Arrow) { ... } TEST(Time_Flies, Like_An_Arrow) { ... } ``` Now, the two `TEST`s will both generate the same class (`Time_Flies_Like_An_Arrow_Test`). That's not good. So for simplicity, we just ask the users to avoid `_` in `TestSuiteName` and `TestName`. The rule is more constraining than necessary, but it's simple and easy to remember. It also gives googletest some wiggle room in case its implementation needs to change in the future. 
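If a name would otherwise want underscores for readability, CamelCase works just as well -- a minimal sketch (the suite and test names here are hypothetical, not from the googletest sources):

```c++
#include "gtest/gtest.h"

// Instead of TEST(Time_Flies, Like_An_Arrow), which generates the same class
// name as TEST(Time, Flies_Like_An_Arrow), spell the names in CamelCase; the
// generated class is TimeFlies_LikeAnArrow_Test, which collides with nothing
// and contains no reserved identifier.
TEST(TimeFlies, LikeAnArrow) { SUCCEED(); }
```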
If you violate the rule, there may not be immediate consequences, but your test may (just may) break with a new compiler (or a new version of the compiler you are using) or with a new version of googletest. Therefore it's best to follow the rule. ## Why does googletest support `EXPECT_EQ(NULL, ptr)` and `ASSERT_EQ(NULL, ptr)` but not `EXPECT_NE(NULL, ptr)` and `ASSERT_NE(NULL, ptr)`? First of all you can use `EXPECT_NE(nullptr, ptr)` and `ASSERT_NE(nullptr, ptr)`. This is the preferred syntax in the style guide because nullptr does not have the type problems that NULL does. Which is why NULL does not work. Due to some peculiarity of C++, it requires some non-trivial template meta programming tricks to support using `NULL` as an argument of the `EXPECT_XX()` and `ASSERT_XX()` macros. Therefore we only do it where it's most needed (otherwise we make the implementation of googletest harder to maintain and more error-prone than necessary). The `EXPECT_EQ()` macro takes the *expected* value as its first argument and the *actual* value as the second. It's reasonable that someone wants to write `EXPECT_EQ(NULL, some_expression)`, and this indeed was requested several times. Therefore we implemented it. The need for `EXPECT_NE(NULL, ptr)` isn't nearly as strong. When the assertion fails, you already know that `ptr` must be `NULL`, so it doesn't add any information to print `ptr` in this case. That means `EXPECT_TRUE(ptr != NULL)` works just as well. If we were to support `EXPECT_NE(NULL, ptr)`, for consistency we'll have to support `EXPECT_NE(ptr, NULL)` as well, as unlike `EXPECT_EQ`, we don't have a convention on the order of the two arguments for `EXPECT_NE`. This means using the template meta programming tricks twice in the implementation, making it even harder to understand and maintain. We believe the benefit doesn't justify the cost. Finally, with the growth of the gMock matcher library, we are encouraging people to use the unified `EXPECT_THAT(value, matcher)` syntax more often in tests. One significant advantage of the matcher approach is that matchers can be easily combined to form new matchers, while the `EXPECT_NE`, etc, macros cannot be easily combined. Therefore we want to invest more in the matchers than in the `EXPECT_XX()` macros. ## I need to test that different implementations of an interface satisfy some common requirements. Should I use typed tests or value-parameterized tests? For testing various implementations of the same interface, either typed tests or value-parameterized tests can get it done. It's really up to you the user to decide which is more convenient for you, depending on your particular case. Some rough guidelines: * Typed tests can be easier to write if instances of the different implementations can be created the same way, modulo the type. For example, if all these implementations have a public default constructor (such that you can write `new TypeParam`), or if their factory functions have the same form (e.g. `CreateInstance()`). * Value-parameterized tests can be easier to write if you need different code patterns to create different implementations' instances, e.g. `new Foo` vs `new Bar(5)`. To accommodate for the differences, you can write factory function wrappers and pass these function pointers to the tests as their parameters. * When a typed test fails, the default output includes the name of the type, which can help you quickly identify which implementation is wrong. Value-parameterized tests only show the number of the failed iteration by default. 
You will need to define a function that returns the iteration name and pass it as the third parameter to INSTANTIATE_TEST_SUITE_P to have more useful output. * When using typed tests, you need to make sure you are testing against the interface type, not the concrete types (in other words, you want to make sure `implicit_cast(my_concrete_impl)` works, not just that `my_concrete_impl` works). It's less likely to make mistakes in this area when using value-parameterized tests. I hope I didn't confuse you more. :-) If you don't mind, I'd suggest you to give both approaches a try. Practice is a much better way to grasp the subtle differences between the two tools. Once you have some concrete experience, you can much more easily decide which one to use the next time. ## I got some run-time errors about invalid proto descriptors when using `ProtocolMessageEquals`. Help! **Note:** `ProtocolMessageEquals` and `ProtocolMessageEquiv` are *deprecated* now. Please use `EqualsProto`, etc instead. `ProtocolMessageEquals` and `ProtocolMessageEquiv` were redefined recently and are now less tolerant of invalid protocol buffer definitions. In particular, if you have a `foo.proto` that doesn't fully qualify the type of a protocol message it references (e.g. `message` where it should be `message`), you will now get run-time errors like: ``` ... descriptor.cc:...] Invalid proto descriptor for file "path/to/foo.proto": ... descriptor.cc:...] blah.MyMessage.my_field: ".Bar" is not defined. ``` If you see this, your `.proto` file is broken and needs to be fixed by making the types fully qualified. The new definition of `ProtocolMessageEquals` and `ProtocolMessageEquiv` just happen to reveal your bug. ## My death test modifies some state, but the change seems lost after the death test finishes. Why? Death tests (`EXPECT_DEATH`, etc) are executed in a sub-process s.t. the expected crash won't kill the test program (i.e. the parent process). As a result, any in-memory side effects they incur are observable in their respective sub-processes, but not in the parent process. You can think of them as running in a parallel universe, more or less. In particular, if you use mocking and the death test statement invokes some mock methods, the parent process will think the calls have never occurred. Therefore, you may want to move your `EXPECT_CALL` statements inside the `EXPECT_DEATH` macro. ## EXPECT_EQ(htonl(blah), blah_blah) generates weird compiler errors in opt mode. Is this a googletest bug? Actually, the bug is in `htonl()`. According to `'man htonl'`, `htonl()` is a *function*, which means it's valid to use `htonl` as a function pointer. However, in opt mode `htonl()` is defined as a *macro*, which breaks this usage. Worse, the macro definition of `htonl()` uses a `gcc` extension and is *not* standard C++. That hacky implementation has some ad hoc limitations. In particular, it prevents you from writing `Foo()`, where `Foo` is a template that has an integral argument. The implementation of `EXPECT_EQ(a, b)` uses `sizeof(... a ...)` inside a template argument, and thus doesn't compile in opt mode when `a` contains a call to `htonl()`. It is difficult to make `EXPECT_EQ` bypass the `htonl()` bug, as the solution must work with different compilers on various platforms. `htonl()` has some other problems as described in `//util/endian/endian.h`, which defines `ghtonl()` to replace it. `ghtonl()` does the same thing `htonl()` does, only without its problems. 
We suggest you to use `ghtonl()` instead of `htonl()`, both in your tests and production code. `//util/endian/endian.h` also defines `ghtons()`, which solves similar problems in `htons()`. Don't forget to add `//util/endian` to the list of dependencies in the `BUILD` file wherever `ghtonl()` and `ghtons()` are used. The library consists of a single header file and will not bloat your binary. ## The compiler complains about "undefined references" to some static const member variables, but I did define them in the class body. What's wrong? If your class has a static data member: ```c++ // foo.h class Foo { ... static const int kBar = 100; }; ``` You also need to define it *outside* of the class body in `foo.cc`: ```c++ const int Foo::kBar; // No initializer here. ``` Otherwise your code is **invalid C++**, and may break in unexpected ways. In particular, using it in googletest comparison assertions (`EXPECT_EQ`, etc) will generate an "undefined reference" linker error. The fact that "it used to work" doesn't mean it's valid. It just means that you were lucky. :-) ## Can I derive a test fixture from another? Yes. Each test fixture has a corresponding and same named test suite. This means only one test suite can use a particular fixture. Sometimes, however, multiple test cases may want to use the same or slightly different fixtures. For example, you may want to make sure that all of a GUI library's test suites don't leak important system resources like fonts and brushes. In googletest, you share a fixture among test suites by putting the shared logic in a base test fixture, then deriving from that base a separate fixture for each test suite that wants to use this common logic. You then use `TEST_F()` to write tests using each derived fixture. Typically, your code looks like this: ```c++ // Defines a base test fixture. class BaseTest : public ::testing::Test { protected: ... }; // Derives a fixture FooTest from BaseTest. class FooTest : public BaseTest { protected: void SetUp() override { BaseTest::SetUp(); // Sets up the base fixture first. ... additional set-up work ... } void TearDown() override { ... clean-up work for FooTest ... BaseTest::TearDown(); // Remember to tear down the base fixture // after cleaning up FooTest! } ... functions and variables for FooTest ... }; // Tests that use the fixture FooTest. TEST_F(FooTest, Bar) { ... } TEST_F(FooTest, Baz) { ... } ... additional fixtures derived from BaseTest ... ``` If necessary, you can continue to derive test fixtures from a derived fixture. googletest has no limit on how deep the hierarchy can be. For a complete example using derived test fixtures, see [googletest sample](https://github.com/google/googletest/blob/master/googletest/samples/sample5_unittest.cc) ## My compiler complains "void value not ignored as it ought to be." What does this mean? You're probably using an `ASSERT_*()` in a function that doesn't return `void`. `ASSERT_*()` can only be used in `void` functions, due to exceptions being disabled by our build system. Please see more details [here](advanced.md#assertion-placement). ## My death test hangs (or seg-faults). How do I fix it? In googletest, death tests are run in a child process and the way they work is delicate. To write death tests you really need to understand how they work. Please make sure you have read [this](advanced.md#how-it-works). In particular, death tests don't like having multiple threads in the parent process. 
So the first thing you can try is to eliminate creating threads outside of `EXPECT_DEATH()`. For example, you may want to use mocks or fake objects instead of real ones in your tests. Sometimes this is impossible as some library you must use may be creating threads before `main()` is even reached. In this case, you can try to minimize the chance of conflicts by either moving as many activities as possible inside `EXPECT_DEATH()` (in the extreme case, you want to move everything inside), or leaving as few things as possible in it. Also, you can try to set the death test style to `"threadsafe"`, which is safer but slower, and see if it helps. If you go with thread-safe death tests, remember that they rerun the test program from the beginning in the child process. Therefore make sure your program can run side-by-side with itself and is deterministic. In the end, this boils down to good concurrent programming. You have to make sure that there is no race conditions or dead locks in your program. No silver bullet - sorry! ## Should I use the constructor/destructor of the test fixture or SetUp()/TearDown()? The first thing to remember is that googletest does **not** reuse the same test fixture object across multiple tests. For each `TEST_F`, googletest will create a **fresh** test fixture object, immediately call `SetUp()`, run the test body, call `TearDown()`, and then delete the test fixture object. When you need to write per-test set-up and tear-down logic, you have the choice between using the test fixture constructor/destructor or `SetUp()/TearDown()`. The former is usually preferred, as it has the following benefits: * By initializing a member variable in the constructor, we have the option to make it `const`, which helps prevent accidental changes to its value and makes the tests more obviously correct. * In case we need to subclass the test fixture class, the subclass' constructor is guaranteed to call the base class' constructor *first*, and the subclass' destructor is guaranteed to call the base class' destructor *afterward*. With `SetUp()/TearDown()`, a subclass may make the mistake of forgetting to call the base class' `SetUp()/TearDown()` or call them at the wrong time. You may still want to use `SetUp()/TearDown()` in the following cases: * C++ does not allow virtual function calls in constructors and destructors. You can call a method declared as virtual, but it will not use dynamic dispatch, it will use the definition from the class the constructor of which is currently executing. This is because calling a virtual method before the derived class constructor has a chance to run is very dangerous - the virtual method might operate on uninitialized data. Therefore, if you need to call a method that will be overridden in a derived class, you have to use `SetUp()/TearDown()`. * In the body of a constructor (or destructor), it's not possible to use the `ASSERT_xx` macros. Therefore, if the set-up operation could cause a fatal test failure that should prevent the test from running, it's necessary to use `abort` and abort the whole test executable, or to use `SetUp()` instead of a constructor. * If the tear-down operation could throw an exception, you must use `TearDown()` as opposed to the destructor, as throwing in a destructor leads to undefined behavior and usually will kill your program right away. Note that many standard libraries (like STL) may throw when exceptions are enabled in the compiler. 
Therefore you should prefer `TearDown()` if you want to write portable tests that work with or without exceptions.

* The googletest team is considering making the assertion macros throw on platforms where exceptions are enabled (e.g. Windows, Mac OS, and Linux client-side), which will eliminate the need for the user to propagate failures from a subroutine to its caller. Therefore, you shouldn't use googletest assertions in a destructor if your code could run on such a platform.

## The compiler complains "no matching function to call" when I use ASSERT_PRED*. How do I fix it?

If the predicate function you use in `ASSERT_PRED*` or `EXPECT_PRED*` is overloaded or a template, the compiler will have trouble figuring out which overloaded version it should use. `ASSERT_PRED_FORMAT*` and `EXPECT_PRED_FORMAT*` don't have this problem.

If you see this error, you might want to switch to `(ASSERT|EXPECT)_PRED_FORMAT*`, which will also give you a better failure message. If, however, that is not an option, you can resolve the problem by explicitly telling the compiler which version to pick.

For example, suppose you have

```c++
bool IsPositive(int n) {
  return n > 0;
}

bool IsPositive(double x) {
  return x > 0;
}
```

you will get a compiler error if you write

```c++
EXPECT_PRED1(IsPositive, 5);
```

However, this will work:

```c++
EXPECT_PRED1(static_cast<bool (*)(int)>(IsPositive), 5);
```

(The stuff inside the angled brackets for the `static_cast` operator is the type of the function pointer for the `int`-version of `IsPositive()`.)

As another example, when you have a template function

```c++
template <typename T>
bool IsNegative(T x) {
  return x < 0;
}
```

you can use it in a predicate assertion like this:

```c++
ASSERT_PRED1(IsNegative<int>, -5);
```

Things are more interesting if your template has more than one parameter. The following won't compile:

```c++
ASSERT_PRED2(GreaterThan<int, int>, 5, 0);
```

as the C++ pre-processor thinks you are giving `ASSERT_PRED2` 4 arguments, which is one more than expected. The workaround is to wrap the predicate function in parentheses:

```c++
ASSERT_PRED2((GreaterThan<int, int>), 5, 0);
```

## My compiler complains about "ignoring return value" when I call RUN_ALL_TESTS(). Why?

Some people had been ignoring the return value of `RUN_ALL_TESTS()`. That is, instead of

```c++
return RUN_ALL_TESTS();
```

they write

```c++
RUN_ALL_TESTS();
```

This is **wrong and dangerous**. The testing service needs to see the return value of `RUN_ALL_TESTS()` in order to determine if a test has passed. If your `main()` function ignores it, your test will be considered successful even if it has a googletest assertion failure. Very bad.

We have decided to fix this (thanks to Michael Chastain for the idea). Now, your code will no longer be able to ignore `RUN_ALL_TESTS()` when compiled with `gcc`. If you do so, you'll get a compiler error.

If you see the compiler complaining about you ignoring the return value of `RUN_ALL_TESTS()`, the fix is simple: just make sure its value is used as the return value of `main()`.

But how could we introduce a change that breaks existing tests? Well, in this case, the code was already broken in the first place, so we didn't break it. :-)

## My compiler complains that a constructor (or destructor) cannot return a value. What's going on?

Due to a peculiarity of C++, in order to support the syntax for streaming messages to an `ASSERT_*`, e.g.
```c++ ASSERT_EQ(1, Foo()) << "blah blah" << foo; ``` we had to give up using `ASSERT*` and `FAIL*` (but not `EXPECT*` and `ADD_FAILURE*`) in constructors and destructors. The workaround is to move the content of your constructor/destructor to a private void member function, or switch to `EXPECT_*()` if that works. This [section](advanced.md#assertion-placement) in the user's guide explains it. ## My SetUp() function is not called. Why? C++ is case-sensitive. Did you spell it as `Setup()`? Similarly, sometimes people spell `SetUpTestSuite()` as `SetupTestSuite()` and wonder why it's never called. ## I have several test suites which share the same test fixture logic, do I have to define a new test fixture class for each of them? This seems pretty tedious. You don't have to. Instead of ```c++ class FooTest : public BaseTest {}; TEST_F(FooTest, Abc) { ... } TEST_F(FooTest, Def) { ... } class BarTest : public BaseTest {}; TEST_F(BarTest, Abc) { ... } TEST_F(BarTest, Def) { ... } ``` you can simply `typedef` the test fixtures: ```c++ typedef BaseTest FooTest; TEST_F(FooTest, Abc) { ... } TEST_F(FooTest, Def) { ... } typedef BaseTest BarTest; TEST_F(BarTest, Abc) { ... } TEST_F(BarTest, Def) { ... } ``` ## googletest output is buried in a whole bunch of LOG messages. What do I do? The googletest output is meant to be a concise and human-friendly report. If your test generates textual output itself, it will mix with the googletest output, making it hard to read. However, there is an easy solution to this problem. Since `LOG` messages go to stderr, we decided to let googletest output go to stdout. This way, you can easily separate the two using redirection. For example: ```shell $ ./my_test > gtest_output.txt ``` ## Why should I prefer test fixtures over global variables? There are several good reasons: 1. It's likely your test needs to change the states of its global variables. This makes it difficult to keep side effects from escaping one test and contaminating others, making debugging difficult. By using fixtures, each test has a fresh set of variables that's different (but with the same names). Thus, tests are kept independent of each other. -1. Global variables pollute the global namespace. -1. Test fixtures can be reused via subclassing, which cannot be done easily +2. Global variables pollute the global namespace. +3. Test fixtures can be reused via subclassing, which cannot be done easily with global variables. This is useful if many test suites have something in common. ## What can the statement argument in ASSERT_DEATH() be? `ASSERT_DEATH(*statement*, *regex*)` (or any death assertion macro) can be used wherever `*statement*` is valid. So basically `*statement*` can be any C++ statement that makes sense in the current context. In particular, it can reference global and/or local variables, and can be: * a simple function call (often the case), * a complex expression, or * a compound statement. Some examples are shown here: ```c++ // A death test can be a simple function call. TEST(MyDeathTest, FunctionCall) { ASSERT_DEATH(Xyz(5), "Xyz failed"); } // Or a complex expression that references variables and functions. TEST(MyDeathTest, ComplexExpression) { const bool c = Condition(); ASSERT_DEATH((c ? Func1(0) : object2.Method("test")), "(Func1|Method) failed"); } // Death assertions can be used any where in a function. In // particular, they can be inside a loop. TEST(MyDeathTest, InsideLoop) { // Verifies that Foo(0), Foo(1), ..., and Foo(4) all die. 
for (int i = 0; i < 5; i++) { EXPECT_DEATH_M(Foo(i), "Foo has \\d+ errors", ::testing::Message() << "where i is " << i); } } // A death assertion can contain a compound statement. TEST(MyDeathTest, CompoundStatement) { // Verifies that at lease one of Bar(0), Bar(1), ..., and // Bar(4) dies. ASSERT_DEATH({ for (int i = 0; i < 5; i++) { Bar(i); } }, "Bar has \\d+ errors"); } ``` gtest-death-test_test.cc contains more examples if you are interested. ## I have a fixture class `FooTest`, but `TEST_F(FooTest, Bar)` gives me error ``"no matching function for call to `FooTest::FooTest()'"``. Why? Googletest needs to be able to create objects of your test fixture class, so it must have a default constructor. Normally the compiler will define one for you. However, there are cases where you have to define your own: * If you explicitly declare a non-default constructor for class `FooTest` (`DISALLOW_EVIL_CONSTRUCTORS()` does this), then you need to define a default constructor, even if it would be empty. * If `FooTest` has a const non-static data member, then you have to define the default constructor *and* initialize the const member in the initializer list of the constructor. (Early versions of `gcc` doesn't force you to initialize the const member. It's a bug that has been fixed in `gcc 4`.) ## Why does ASSERT_DEATH complain about previous threads that were already joined? With the Linux pthread library, there is no turning back once you cross the line from single thread to multiple threads. The first time you create a thread, a manager thread is created in addition, so you get 3, not 2, threads. Later when the thread you create joins the main thread, the thread count decrements by 1, but the manager thread will never be killed, so you still have 2 threads, which means you cannot safely run a death test. The new NPTL thread library doesn't suffer from this problem, as it doesn't create a manager thread. However, if you don't control which machine your test runs on, you shouldn't depend on this. ## Why does googletest require the entire test suite, instead of individual tests, to be named *DeathTest when it uses ASSERT_DEATH? googletest does not interleave tests from different test suites. That is, it runs all tests in one test suite first, and then runs all tests in the next test suite, and so on. googletest does this because it needs to set up a test suite before the first test in it is run, and tear it down afterwords. Splitting up the test case would require multiple set-up and tear-down processes, which is inefficient and makes the semantics unclean. If we were to determine the order of tests based on test name instead of test case name, then we would have a problem with the following situation: ```c++ TEST_F(FooTest, AbcDeathTest) { ... } TEST_F(FooTest, Uvw) { ... } TEST_F(BarTest, DefDeathTest) { ... } TEST_F(BarTest, Xyz) { ... } ``` Since `FooTest.AbcDeathTest` needs to run before `BarTest.Xyz`, and we don't interleave tests from different test suites, we need to run all tests in the `FooTest` case before running any test in the `BarTest` case. This contradicts with the requirement to run `BarTest.DefDeathTest` before `FooTest.Uvw`. ## But I don't like calling my entire test suite \*DeathTest when it contains both death tests and non-death tests. What do I do? You don't have to, but if you like, you may split up the test suite into `FooTest` and `FooDeathTest`, where the names make it clear that they are related: ```c++ class FooTest : public ::testing::Test { ... 
}; TEST_F(FooTest, Abc) { ... } TEST_F(FooTest, Def) { ... } using FooDeathTest = FooTest; TEST_F(FooDeathTest, Uvw) { ... EXPECT_DEATH(...) ... } TEST_F(FooDeathTest, Xyz) { ... ASSERT_DEATH(...) ... } ``` ## googletest prints the LOG messages in a death test's child process only when the test fails. How can I see the LOG messages when the death test succeeds? Printing the LOG messages generated by the statement inside `EXPECT_DEATH()` makes it harder to search for real problems in the parent's log. Therefore, googletest only prints them when the death test has failed. If you really need to see such LOG messages, a workaround is to temporarily break the death test (e.g. by changing the regex pattern it is expected to match). Admittedly, this is a hack. We'll consider a more permanent solution after the fork-and-exec-style death tests are implemented. ## The compiler complains about "no match for 'operator<<'" when I use an assertion. What gives? If you use a user-defined type `FooType` in an assertion, you must make sure there is an `std::ostream& operator<<(std::ostream&, const FooType&)` function defined such that we can print a value of `FooType`. In addition, if `FooType` is declared in a name space, the `<<` operator also needs to be defined in the *same* name space. See https://abseil.io/tips/49 for details. ## How do I suppress the memory leak messages on Windows? Since the statically initialized googletest singleton requires allocations on the heap, the Visual C++ memory leak detector will report memory leaks at the end of the program run. The easiest way to avoid this is to use the `_CrtMemCheckpoint` and `_CrtMemDumpAllObjectsSince` calls to not report any statically initialized heap objects. See MSDN for more details and additional heap check/debug routines. ## How can my code detect if it is running in a test? If you write code that sniffs whether it's running in a test and does different things accordingly, you are leaking test-only logic into production code and there is no easy way to ensure that the test-only code paths aren't run by mistake in production. Such cleverness also leads to [Heisenbugs](https://en.wikipedia.org/wiki/Heisenbug). Therefore we strongly advise against the practice, and googletest doesn't provide a way to do it. In general, the recommended way to cause the code to behave differently under test is [Dependency Injection](https://en.wikipedia.org/wiki/Dependency_injection). You can inject different functionality from the test and from the production code. Since your production code doesn't link in the for-test logic at all (the [`testonly`](https://docs.bazel.build/versions/master/be/common-definitions.html#common.testonly) attribute for BUILD targets helps to ensure that), there is no danger in accidentally running it. However, if you *really*, *really*, *really* have no choice, and if you follow the rule of ending your test program names with `_test`, you can use the *horrible* hack of sniffing your executable name (`argv[0]` in `main()`) to know whether the code is under test. ## How do I temporarily disable a test? If you have a broken test that you cannot fix right away, you can add the DISABLED_ prefix to its name. This will exclude it from execution. This is better than commenting out the code or using #if 0, as disabled tests are still compiled (and thus won't rot). To include disabled tests in test execution, just invoke the test program with the --gtest_also_run_disabled_tests flag. 
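For example, a minimal sketch of the renaming described above (the suite and test names are hypothetical):

```c++
#include "gtest/gtest.h"

// Adding the DISABLED_ prefix excludes this test from normal runs, but it is
// still compiled, so it won't rot. Remove the prefix once the test is fixed,
// or run with --gtest_also_run_disabled_tests to execute it anyway.
TEST(NetworkTest, DISABLED_HandlesTimeout) {
  // ... body of the broken test ...
}
```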
## Is it OK if I have two separate `TEST(Foo, Bar)` test methods defined in different namespaces? Yes. The rule is **all test methods in the same test suite must use the same fixture class.** This means that the following is **allowed** because both tests use the same fixture class (`::testing::Test`). ```c++ namespace foo { TEST(CoolTest, DoSomething) { SUCCEED(); } } // namespace foo namespace bar { TEST(CoolTest, DoSomething) { SUCCEED(); } } // namespace bar ``` However, the following code is **not allowed** and will produce a runtime error from googletest because the test methods are using different test fixture classes with the same test suite name. ```c++ namespace foo { class CoolTest : public ::testing::Test {}; // Fixture foo::CoolTest TEST_F(CoolTest, DoSomething) { SUCCEED(); } } // namespace foo namespace bar { class CoolTest : public ::testing::Test {}; // Fixture: bar::CoolTest TEST_F(CoolTest, DoSomething) { SUCCEED(); } } // namespace bar ``` diff --git a/googletest/docs/primer.md b/googletest/docs/primer.md index 388df3b5..e441ceb8 100644 --- a/googletest/docs/primer.md +++ b/googletest/docs/primer.md @@ -1,565 +1,565 @@ # Googletest Primer ## Introduction: Why googletest? *googletest* helps you write better C++ tests. googletest is a testing framework developed by the Testing Technology team with Google's specific requirements and constraints in mind. No matter whether you work on Linux, Windows, or a Mac, if you write C++ code, googletest can help you. And it supports *any* kind of tests, not just unit tests. So what makes a good test, and how does googletest fit in? We believe: 1. Tests should be *independent* and *repeatable*. It's a pain to debug a test that succeeds or fails as a result of other tests. googletest isolates the tests by running each of them on a different object. When a test fails, googletest allows you to run it in isolation for quick debugging. -1. Tests should be well *organized* and reflect the structure of the tested +2. Tests should be well *organized* and reflect the structure of the tested code. googletest groups related tests into test suites that can share data and subroutines. This common pattern is easy to recognize and makes tests easy to maintain. Such consistency is especially helpful when people switch projects and start to work on a new code base. -1. Tests should be *portable* and *reusable*. Google has a lot of code that is +3. Tests should be *portable* and *reusable*. Google has a lot of code that is platform-neutral, its tests should also be platform-neutral. googletest works on different OSes, with different compilers, with or without exceptions, so googletest tests can work with a variety of configurations. -1. When tests fail, they should provide as much *information* about the problem +4. When tests fail, they should provide as much *information* about the problem as possible. googletest doesn't stop at the first test failure. Instead, it only stops the current test and continues with the next. You can also set up tests that report non-fatal failures after which the current test continues. Thus, you can detect and fix multiple bugs in a single run-edit-compile cycle. -1. The testing framework should liberate test writers from housekeeping chores +5. The testing framework should liberate test writers from housekeeping chores and let them focus on the test *content*. googletest automatically keeps track of all tests defined, and doesn't require the user to enumerate them in order to run them. -1. Tests should be *fast*. 
With googletest, you can reuse shared resources +6. Tests should be *fast*. With googletest, you can reuse shared resources across tests and pay for the set-up/tear-down only once, without making tests depend on each other. Since googletest is based on the popular xUnit architecture, you'll feel right at home if you've used JUnit or PyUnit before. If not, it will take you about 10 minutes to learn the basics and get started. So let's go! ## Beware of the nomenclature _Note:_ There might be some confusion of idea due to different definitions of the terms _Test_, _Test Case_ and _Test Suite_, so beware of misunderstanding these. Historically, googletest started to use the term _Test Case_ for grouping related tests, whereas current publications including the International Software Testing Qualifications Board ([ISTQB](http://www.istqb.org/)) and various textbooks on Software Quality use the term _[Test Suite](http://glossary.istqb.org/search/test%20suite)_ for this. The related term _Test_, as it is used in the googletest, is corresponding to the term _[Test Case](http://glossary.istqb.org/search/test%20case)_ of ISTQB and others. The term _Test_ is commonly of broad enough sense, including ISTQB's definition of _Test Case_, so it's not much of a problem here. But the term _Test Case_ as was used in Google Test is of contradictory sense and thus confusing. googletest recently started replacing the term _Test Case_ by _Test Suite_ The preferred API is TestSuite*. The older TestCase* API is being slowly deprecated and refactored away So please be aware of the different definitions of the terms: Meaning | googletest Term | [ISTQB](http://www.istqb.org/) Term :----------------------------------------------------------------------------------- | :---------------------- | :---------------------------------- Exercise a particular program path with specific input values and verify the results | [TEST()](#simple-tests) | [Test Case](http://glossary.istqb.org/search/test%20case) ## Basic Concepts When using googletest, you start by writing *assertions*, which are statements that check whether a condition is true. An assertion's result can be *success*, *nonfatal failure*, or *fatal failure*. If a fatal failure occurs, it aborts the current function; otherwise the program continues normally. *Tests* use assertions to verify the tested code's behavior. If a test crashes or has a failed assertion, then it *fails*; otherwise it *succeeds*. A *test suite* contains one or many tests. You should group your tests into test suites that reflect the structure of the tested code. When multiple tests in a test suite need to share common objects and subroutines, you can put them into a *test fixture* class. A *test program* can contain multiple test suites. We'll now explain how to write a test program, starting at the individual assertion level and building up to tests and test suites. ## Assertions googletest assertions are macros that resemble function calls. You test a class or function by making assertions about its behavior. When an assertion fails, googletest prints the assertion's source file and line number location, along with a failure message. You may also supply a custom failure message which will be appended to googletest's message. The assertions come in pairs that test the same thing but have different effects on the current function. `ASSERT_*` versions generate fatal failures when they fail, and **abort the current function**. 
`EXPECT_*` versions generate nonfatal failures, which don't abort the current function. Usually `EXPECT_*` are preferred, as they allow more than one failure to be reported in a test. However, you should use `ASSERT_*` if it doesn't make sense to continue when the assertion in question fails. Since a failed `ASSERT_*` returns from the current function immediately, possibly skipping clean-up code that comes after it, it may cause a space leak. Depending on the nature of the leak, it may or may not be worth fixing - so keep this in mind if you get a heap checker error in addition to assertion errors. To provide a custom failure message, simply stream it into the macro using the `<<` operator, or a sequence of such operators. An example: ```c++ ASSERT_EQ(x.size(), y.size()) << "Vectors x and y are of unequal length"; for (int i = 0; i < x.size(); ++i) { EXPECT_EQ(x[i], y[i]) << "Vectors x and y differ at index " << i; } ``` Anything that can be streamed to an `ostream` can be streamed to an assertion macro--in particular, C strings and `string` objects. If a wide string (`wchar_t*`, `TCHAR*` in `UNICODE` mode on Windows, or `std::wstring`) is streamed to an assertion, it will be translated to UTF-8 when printed. ### Basic Assertions These assertions do basic true/false condition testing. Fatal assertion | Nonfatal assertion | Verifies -------------------------- | -------------------------- | -------------------- `ASSERT_TRUE(condition);` | `EXPECT_TRUE(condition);` | `condition` is true `ASSERT_FALSE(condition);` | `EXPECT_FALSE(condition);` | `condition` is false Remember, when they fail, `ASSERT_*` yields a fatal failure and returns from the current function, while `EXPECT_*` yields a nonfatal failure, allowing the function to continue running. In either case, an assertion failure means its containing test fails. **Availability**: Linux, Windows, Mac. ### Binary Comparison This section describes assertions that compare two values. Fatal assertion | Nonfatal assertion | Verifies ------------------------ | ------------------------ | -------------- `ASSERT_EQ(val1, val2);` | `EXPECT_EQ(val1, val2);` | `val1 == val2` `ASSERT_NE(val1, val2);` | `EXPECT_NE(val1, val2);` | `val1 != val2` `ASSERT_LT(val1, val2);` | `EXPECT_LT(val1, val2);` | `val1 < val2` `ASSERT_LE(val1, val2);` | `EXPECT_LE(val1, val2);` | `val1 <= val2` `ASSERT_GT(val1, val2);` | `EXPECT_GT(val1, val2);` | `val1 > val2` `ASSERT_GE(val1, val2);` | `EXPECT_GE(val1, val2);` | `val1 >= val2` Value arguments must be comparable by the assertion's comparison operator or you'll get a compiler error. We used to require the arguments to support the `<<` operator for streaming to an `ostream`, but it's no longer necessary. If `<<` is supported, it will be called to print the arguments when the assertion fails; otherwise googletest will attempt to print them in the best way it can. For more details and how to customize the printing of the arguments, see [documentation](../../googlemock/docs/cook_book.md#teaching-gmock-how-to-print-your-values) These assertions can work with a user-defined type, but only if you define the corresponding comparison operator (e.g. `==`, `<`, etc). Since this is discouraged by the Google [C++ Style Guide](https://google.github.io/styleguide/cppguide.html#Operator_Overloading), you may need to use `ASSERT_TRUE()` or `EXPECT_TRUE()` to assert the equality of two objects of a user-defined type. 
However, when possible, `ASSERT_EQ(actual, expected)` is preferred to `ASSERT_TRUE(actual == expected)`, since it tells you `actual` and `expected`'s values on failure. Arguments are always evaluated exactly once. Therefore, it's OK for the arguments to have side effects. However, as with any ordinary C/C++ function, the arguments' evaluation order is undefined (i.e. the compiler is free to choose any order) and your code should not depend on any particular argument evaluation order. `ASSERT_EQ()` does pointer equality on pointers. If used on two C strings, it tests if they are in the same memory location, not if they have the same value. Therefore, if you want to compare C strings (e.g. `const char*`) by value, use `ASSERT_STREQ()`, which will be described later on. In particular, to assert that a C string is `NULL`, use `ASSERT_STREQ(c_string, NULL)`. Consider using `ASSERT_EQ(c_string, nullptr)` if c++11 is supported. To compare two `string` objects, you should use `ASSERT_EQ`. When doing pointer comparisons use `*_EQ(ptr, nullptr)` and `*_NE(ptr, nullptr)` instead of `*_EQ(ptr, NULL)` and `*_NE(ptr, NULL)`. This is because `nullptr` is typed while `NULL` is not. See [FAQ](faq.md)for more details. If you're working with floating point numbers, you may want to use the floating point variations of some of these macros in order to avoid problems caused by rounding. See [Advanced googletest Topics](advanced.md) for details. Macros in this section work with both narrow and wide string objects (`string` and `wstring`). **Availability**: Linux, Windows, Mac. **Historical note**: Before February 2016 `*_EQ` had a convention of calling it as `ASSERT_EQ(expected, actual)`, so lots of existing code uses this order. Now `*_EQ` treats both parameters in the same way. ### String Comparison The assertions in this group compare two **C strings**. If you want to compare two `string` objects, use `EXPECT_EQ`, `EXPECT_NE`, and etc instead. | Fatal assertion | Nonfatal assertion | Verifies | | ----------------------- | ----------------------- | ---------------------- | | `ASSERT_STREQ(str1, | `EXPECT_STREQ(str1, | the two C strings have | : str2);` : str2);` : the same content : | `ASSERT_STRNE(str1, | `EXPECT_STRNE(str1, | the two C strings have | : str2);` : str2);` : different contents : | `ASSERT_STRCASEEQ(str1, | `EXPECT_STRCASEEQ(str1, | the two C strings have | : str2);` : str2);` : the same content, : : : : ignoring case : | `ASSERT_STRCASENE(str1, | `EXPECT_STRCASENE(str1, | the two C strings have | : str2);` : str2);` : different contents, : : : : ignoring case : Note that "CASE" in an assertion name means that case is ignored. A `NULL` pointer and an empty string are considered *different*. `*STREQ*` and `*STRNE*` also accept wide C strings (`wchar_t*`). If a comparison of two wide strings fails, their values will be printed as UTF-8 narrow strings. **Availability**: Linux, Windows, Mac. **See also**: For more string comparison tricks (substring, prefix, suffix, and regular expression matching, for example), see [this](https://github.com/google/googletest/blob/master/googletest/docs/advanced.md) in the Advanced googletest Guide. ## Simple Tests To create a test: 1. Use the `TEST()` macro to define and name a test function, These are ordinary C++ functions that don't return a value. -1. In this function, along with any valid C++ statements you want to include, +2. In this function, along with any valid C++ statements you want to include, use the various googletest assertions to check values. -1. 
The test's result is determined by the assertions; if any assertion in the +3. The test's result is determined by the assertions; if any assertion in the test fails (either fatally or non-fatally), or if the test crashes, the entire test fails. Otherwise, it succeeds. ```c++ TEST(TestSuiteName, TestName) { ... test body ... } ``` `TEST()` arguments go from general to specific. The *first* argument is the name of the test suite, and the *second* argument is the test's name within the test case. Both names must be valid C++ identifiers, and they should not contain underscore (`_`). A test's *full name* consists of its containing test suite and its individual name. Tests from different test suites can have the same individual name. For example, let's take a simple integer function: ```c++ int Factorial(int n); // Returns the factorial of n ``` A test suite for this function might look like: ```c++ // Tests factorial of 0. TEST(FactorialTest, HandlesZeroInput) { EXPECT_EQ(Factorial(0), 1); } // Tests factorial of positive numbers. TEST(FactorialTest, HandlesPositiveInput) { EXPECT_EQ(Factorial(1), 1); EXPECT_EQ(Factorial(2), 2); EXPECT_EQ(Factorial(3), 6); EXPECT_EQ(Factorial(8), 40320); } ``` googletest groups the test results by test suites, so logically-related tests should be in the same test suite; in other words, the first argument to their `TEST()` should be the same. In the above example, we have two tests, `HandlesZeroInput` and `HandlesPositiveInput`, that belong to the same test suite `FactorialTest`. When naming your test suites and tests, you should follow the same convention as for [naming functions and classes](https://google.github.io/styleguide/cppguide.html#Function_Names). **Availability**: Linux, Windows, Mac. ## Test Fixtures: Using the Same Data Configuration for Multiple Tests If you find yourself writing two or more tests that operate on similar data, you can use a *test fixture*. It allows you to reuse the same configuration of objects for several different tests. To create a fixture: 1. Derive a class from `::testing::Test` . Start its body with `protected:` as we'll want to access fixture members from sub-classes. -1. Inside the class, declare any objects you plan to use. -1. If necessary, write a default constructor or `SetUp()` function to prepare +2. Inside the class, declare any objects you plan to use. +3. If necessary, write a default constructor or `SetUp()` function to prepare the objects for each test. A common mistake is to spell `SetUp()` as **`Setup()`** with a small `u` - Use `override` in C++11 to make sure you spelled it correctly -1. If necessary, write a destructor or `TearDown()` function to release any +4. If necessary, write a destructor or `TearDown()` function to release any resources you allocated in `SetUp()` . To learn when you should use the constructor/destructor and when you should use `SetUp()/TearDown()`, read the [FAQ](faq.md). -1. If needed, define subroutines for your tests to share. +5. If needed, define subroutines for your tests to share. When using a fixture, use `TEST_F()` instead of `TEST()` as it allows you to access objects and subroutines in the test fixture: ```c++ TEST_F(TestFixtureName, TestName) { ... test body ... } ``` Like `TEST()`, the first argument is the test suite name, but for `TEST_F()` this must be the name of the test fixture class. You've probably guessed: `_F` is for fixture. Unfortunately, the C++ macro system does not allow us to create a single macro that can handle both types of tests. 
For each test defined with `TEST_F()`, googletest will create a *fresh* test fixture at runtime, immediately initialize it via `SetUp()`, run the test, clean up by calling `TearDown()`, and then delete the test fixture. Note that different tests in the same test suite have different test fixture objects, and googletest always deletes a test fixture before it creates the next one. googletest does **not** reuse the same test fixture for multiple tests. Any changes one test makes to the fixture do not affect other tests.

As an example, let's write tests for a FIFO queue class named `Queue`, which has the following interface:

```c++
template <typename E>  // E is the element type.
class Queue {
 public:
  Queue();
  void Enqueue(const E& element);
  E* Dequeue();  // Returns NULL if the queue is empty.
  size_t size() const;
  ...
};
```

First, define a fixture class. By convention, you should give it the name `FooTest` where `Foo` is the class being tested.

```c++
class QueueTest : public ::testing::Test {
 protected:
  void SetUp() override {
    q1_.Enqueue(1);
    q2_.Enqueue(2);
    q2_.Enqueue(3);
  }

  // void TearDown() override {}

  Queue<int> q0_;
  Queue<int> q1_;
  Queue<int> q2_;
};
```

In this case, `TearDown()` is not needed since we don't have to clean up after each test, other than what's already done by the destructor.

Now we'll write tests using `TEST_F()` and this fixture.

```c++
TEST_F(QueueTest, IsEmptyInitially) {
  EXPECT_EQ(q0_.size(), 0);
}

TEST_F(QueueTest, DequeueWorks) {
  int* n = q0_.Dequeue();
  EXPECT_EQ(n, nullptr);

  n = q1_.Dequeue();
  ASSERT_NE(n, nullptr);
  EXPECT_EQ(*n, 1);
  EXPECT_EQ(q1_.size(), 0);
  delete n;

  n = q2_.Dequeue();
  ASSERT_NE(n, nullptr);
  EXPECT_EQ(*n, 2);
  EXPECT_EQ(q2_.size(), 1);
  delete n;
}
```

The above uses both `ASSERT_*` and `EXPECT_*` assertions. The rule of thumb is to use `EXPECT_*` when you want the test to continue to reveal more errors after the assertion failure, and use `ASSERT_*` when continuing after failure doesn't make sense. For example, the second assertion in the `Dequeue` test is `ASSERT_NE(n, nullptr)`, as we need to dereference the pointer `n` later, which would lead to a segfault when `n` is `NULL`.

When these tests run, the following happens:

1.  googletest constructs a `QueueTest` object (let's call it `t1`).
-1. `t1.SetUp()` initializes `t1`.
-1. The first test (`IsEmptyInitially`) runs on `t1`.
-1. `t1.TearDown()` cleans up after the test finishes.
-1. `t1` is destructed.
-1. The above steps are repeated on another `QueueTest` object, this time
+2. `t1.SetUp()` initializes `t1`.
+3. The first test (`IsEmptyInitially`) runs on `t1`.
+4. `t1.TearDown()` cleans up after the test finishes.
+5. `t1` is destructed.
+6. The above steps are repeated on another `QueueTest` object, this time
    running the `DequeueWorks` test.

**Availability**: Linux, Windows, Mac.
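To make the fresh-fixture behavior above concrete, here is a small additional sketch (the `CounterTest` fixture and its member are invented for this example): each `TEST_F()` runs on its own fixture object, so a mutation made in one test is never visible to another.

```c++
#include <vector>
#include "gtest/gtest.h"

class CounterTest : public ::testing::Test {
 protected:
  void SetUp() override { values_.push_back(1); }
  std::vector<int> values_;
};

// Each TEST_F below gets its own CounterTest object, so the push_back done in
// the first test is never visible to the second one.
TEST_F(CounterTest, MutatesItsOwnFixture) {
  values_.push_back(2);
  EXPECT_EQ(values_.size(), 2);
}

TEST_F(CounterTest, StillSeesFreshState) {
  EXPECT_EQ(values_.size(), 1);  // only the element added in SetUp()
}
```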
## Invoking the Tests

`TEST()` and `TEST_F()` implicitly register their tests with googletest. So, unlike with many other C++ testing frameworks, you don't have to re-list all your defined tests in order to run them.

After defining your tests, you can run them with `RUN_ALL_TESTS()`, which returns `0` if all the tests are successful, or `1` otherwise. Note that `RUN_ALL_TESTS()` runs *all tests* in your link unit -- they can be from different test suites, or even different source files.

When invoked, the `RUN_ALL_TESTS()` macro:

*   Saves the state of all googletest flags.
*   Creates a test fixture object for the first test.
*   Initializes it via `SetUp()`.
*   Runs the test on the fixture object.
*   Cleans up the fixture via `TearDown()`.
*   Deletes the fixture.
*   Restores the state of all googletest flags.
*   Repeats the above steps for the next test, until all tests have run.

If a fatal failure happens, the subsequent steps will be skipped.

> IMPORTANT: You must **not** ignore the return value of `RUN_ALL_TESTS()`, or
> you will get a compiler error. The rationale for this design is that the
> automated testing service determines whether a test has passed based on its
> exit code, not on its stdout/stderr output; thus your `main()` function must
> return the value of `RUN_ALL_TESTS()`.
>
> Also, you should call `RUN_ALL_TESTS()` only **once**. Calling it more than
> once conflicts with some advanced googletest features (e.g. thread-safe
> [death tests](advanced.md#death-tests)) and thus is not supported.

**Availability**: Linux, Windows, Mac.

## Writing the main() Function

Write your own main() function, which should return the value of `RUN_ALL_TESTS()`. You can start from this boilerplate:

```c++
#include "this/package/foo.h"
#include "gtest/gtest.h"

namespace {

// The fixture for testing class Foo.
class FooTest : public ::testing::Test {
 protected:
  // You can remove any or all of the following functions if their bodies
  // would be empty.

  FooTest() {
    // You can do set-up work for each test here.
  }

  ~FooTest() override {
    // You can do clean-up work that doesn't throw exceptions here.
  }

  // If the constructor and destructor are not enough for setting up
  // and cleaning up each test, you can define the following methods:

  void SetUp() override {
    // Code here will be called immediately after the constructor (right
    // before each test).
  }

  void TearDown() override {
    // Code here will be called immediately after each test (right
    // before the destructor).
  }

  // Objects declared here can be used by all tests in the test suite for Foo.
};

// Tests that the Foo::Bar() method does Abc.
TEST_F(FooTest, MethodBarDoesAbc) {
  const std::string input_filepath = "this/package/testdata/myinputfile.dat";
  const std::string output_filepath = "this/package/testdata/myoutputfile.dat";
  Foo f;
  EXPECT_EQ(f.Bar(input_filepath, output_filepath), 0);
}

// Tests that Foo does Xyz.
TEST_F(FooTest, DoesXyz) {
  // Exercises the Xyz feature of Foo.
}

}  // namespace

int main(int argc, char **argv) {
  ::testing::InitGoogleTest(&argc, argv);
  return RUN_ALL_TESTS();
}
```

The `::testing::InitGoogleTest()` function parses the command line for googletest flags, and removes all recognized flags. This allows the user to control a test program's behavior via various flags, which we'll cover in the [AdvancedGuide](advanced.md). You **must** call this function before calling `RUN_ALL_TESTS()`, or the flags won't be properly initialized.

On Windows, `InitGoogleTest()` also works with wide strings, so it can be used in programs compiled in `UNICODE` mode as well.

But maybe you think that writing all those main() functions is too much work? We agree with you completely, and that's why Google Test provides a basic implementation of main(). If it fits your needs, then just link your test with the gtest\_main library and you are good to go.

NOTE: `ParseGUnitFlags()` is deprecated in favor of `InitGoogleTest()`.
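If you rely on the gtest\_main library mentioned above, a test file can omit `main()` entirely; a minimal sketch (the file name and build command are illustrative and depend on your setup):

```c++
// factorial_test.cc -- note there is no main() here: linking against the
// gtest_main library supplies one that calls InitGoogleTest() and
// RUN_ALL_TESTS() for you.
#include "gtest/gtest.h"

int Factorial(int n) { return n <= 1 ? 1 : n * Factorial(n - 1); }

TEST(FactorialTest, HandlesZeroInput) {
  EXPECT_EQ(Factorial(0), 1);
}

// One possible build line (library names and flags vary by platform):
//   g++ factorial_test.cc -lgtest_main -lgtest -lpthread -o factorial_test
```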
## Known Limitations

*   Google Test is designed to be thread-safe. The implementation is
    thread-safe on systems where the `pthreads` library is available. It is
    currently _unsafe_ to use Google Test assertions from two threads
    concurrently on other systems (e.g. Windows). In most tests this is not an
    issue, as usually the assertions are done in the main thread. If you want
    to help, you can volunteer to implement the necessary synchronization
    primitives in `gtest-port.h` for your platform.
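For instance, one common way to stay within this limitation is to do the work on a background thread but keep the assertions themselves on the main test thread; a small illustrative sketch (the test name is invented for this example):

```c++
#include <thread>

#include "gtest/gtest.h"

// The work happens on a background thread, but the assertion itself runs on
// the main test thread, which is safe on every supported platform.
TEST(ThreadedWorkExample, AssertOnMainThreadOnly) {
  int result = 0;
  std::thread worker([&result] { result = 2 + 2; });
  worker.join();         // wait for the worker before asserting
  EXPECT_EQ(result, 4);  // assertion stays on the main thread
}
```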