1. Discovering BDD
2. Your first Scenario
The video and audio assets for this chapter are here.
2.1. An introduction to Shouty
Shouty is a social network that allows people who are physically close to communicate, just like people have always communicated with their voices. In the real world you can talk to someone in the same room, or across the street. Or even 100 m away from you in a park - if you shout.
That’s Shouty. What you say on this social network can only be “heard” by people who are nearby.
2.2. Choose the first scenario
Let’s start with a very basic example of Shouty’s behaviour. Something we might have discussed in a three amigos meeting:
Sean the shouter shouts "free bagels at Sean's" and Lucy the listener, who happens to be standing across the street from his store, 15 metres away, hears him. She walks into Sean's Coffee and takes advantage of the offer.
We can translate this into a Gherkin scenario so that Cucumber can run it. Here’s how that would look.
Scenario: Listener is within range
Given Lucy is located 15m from Sean
When Sean shouts "free bagels at Sean's"
Then Lucy hears Sean’s message
You can see there are four special keywords being used here. Scenario just tells Cucumber we're about to describe an example that it can execute. Then you see the lines beginning with Given, When and Then.
Given
is the context for the scenario. We’re putting the system into a specific state, ready for the scenario to unfold.
When
is an action. Something that happens to the system that will cause something else to happen: an outcome.
Then
is the outcome. It’s the behaviour we expect from the system when this action happens in this context.
You’ll notice we’ve omitted from our outcome anything about Lucy walking into Sean’s store and making a purchase. Remember, Gherkin is supposed to describe the behaviour of the system, so it would be distracting to have it in our scenario.
Each scenario has these three ingredients: a context, an action, and one or more outcomes.
Together, they describe one single aspect of the behaviour of the system. An example.
Now that we’ve translated our example into Gherkin, we can automate it!
2.2.1. Lesson 2 - Questions
What’s an advantage of using Gherkin to express our examples in BDD? (choose one) ::
- We can get Cucumber to test whether the code does what the scenario describes.
- We can easily automate tests even if we don't know much about programming.
- We can use tools to generate the scenarios.
Explanation: Gherkin is just one way of expressing examples of how you want your system to behave. The advantage of using this particular format is that you can use Cucumber to test them for you, making them into Living Documentation.
Which of these are Gherkin keywords? (choose multiple)::
- Scenario
- Story
- Given
- Only
- If
- When
- Before
- Then
- While
- Check
Explanation:
We’ve introduced four Gherkin keywords so far:
* Scenario tells Cucumber we're about to describe an example that it can execute.
* Given, When and Then identify the steps of the scenario.
There are a few other keywords which will be introduced later in the course.
The Gherkin keywords Given, When and Then allow us to express three different components of a scenario. Which of these statements correctly describes how each of these keywords should be used? (Choose multiple)::
- Given describes something that has already happened before the interesting part of the scenario starts. (Correct)
- Then describes an action you want to take.
- When explains what should happen at the end of the scenario.
- Then explains what should happen at the end of the scenario. (Correct)
- When expresses an action that changes the state of the system. (Correct)
- Given describes the context in which the scenario occurs. (Correct)
Explanation: Given is the context for the scenario. We’re putting the system into a specific state, ready for the scenario to unfold.
When is an action. Something that happens to the system that will cause something else to happen: an outcome.
Then is the outcome. It’s the behaviour we expect from the system when this action happens in this context.
Why did our scenario not mention anything about Lucy walking into Sean’s store and making a purchase?
- It's a business goal which does not belong in a Gherkin document.
- As BDD practitioners, we're focussed on the behaviour of the system, so we don't care about the people who use the software.
- Including details about these two people would be distracting from the main point of our scenario.
- Executable scenarios need to stay focussed on the behaviour of the system itself. We can document business goals elsewhere in our Gherkin to provide context. - TRUE
Explanation: Behaviour-Driven Development practitioners definitely do care about business goals, but when we’re writing the Scenario part of our Gherkin, we need to focus on the observable, testable behaviour of the system we’re building.
Later in the course we’ll show you how you can use other parts of Gherkin documents to add other relevant details, like business goals, to make great executable specifications.
2.3. Install SpecFlow
Hello! I'm Gaspar Nagy, the creator of SpecFlow and nowadays a BDD trainer. I will guide you through the SpecFlow automation topics in Cucumber School. First, let's install SpecFlow!
SpecFlow is an open-source tool, available as a NuGet package that you need to configure for your project. Although SpecFlow works fine even without Visual Studio, in Cucumber School we are going to use Visual Studio 2019, because it integrates nicely with SpecFlow. If you don't have Visual Studio, you can download the Visual Studio Community edition, which is free for educational purposes and small teams.
In order to use the Visual Studio integration of SpecFlow, you need to install a Visual Studio extension. This is something you need to do only once. The Visual Studio extensions can be managed by opening the Manage Extensions command from the Extensions menu.
There are plenty of useful extensions in the Visual Studio marketplace and for SpecFlow there are two that you can choose from. You will find these if you type SpecFlow into the search box. Both the SpecFlow for Visual Studio 2019 and the Deveroom for SpecFlow extensions work well with SpecFlow and both of them are free and open-source. In this course I will use the Deveroom extension, but you can follow the exercises with the other one as well.
In order to install the extension you just need to click on the Download button next to the name of the extension you selected. The extension is downloaded, but it only gets installed once you close your Visual Studio. So we need to close all instances of Visual Studio 2019 and wait for the install dialog to pop up.
Here it is. Accept the installation of the extension by clicking on the Modify button; that completes the setup process. Our Visual Studio is now ready to work with SpecFlow. So are we.
Now we are going to create a Visual Studio solution for the Shouty application. As we will focus on the business logic of the application in this course, I create a .NET Standard class library project for the production code. I also remove the class that comes with the template.
We also need to add a project for the scenarios and the automation code. SpecFlow works with test execution frameworks in order to make the scenarios executable. It supports all well known test execution frameworks like MsTest, NUnit or xUnit. There is also a free dedicated runner developed by Tricentis called SpecFlow+ Runner. For the sake of simplicity in this course we are going to use xUnit, so I add a .NET Core xUnit Test Project to my solution.
I call our test project Shouty.Specs. Including Specs in the project name emphasizes that we are creating an executable specification. As we won't have coded unit tests in this project, I remove the UnitTest1.cs file added by the template.
To make this project a SpecFlow project, we need to add two NuGet package references.
The first is SpecFlow.xUnit. This is going to install SpecFlow for the project and configure it to work with xUnit.
At the time of the recording this leads to an error as the xUnit version used by the xUnit Test Project template is not recent enough for the latest SpecFlow version. This is something we can easily fix by updating the xUnit related packages of the specs project. In fact we can upgrade all packages in this case.
Let’s retry. Installing the SpecFlow.xUnit package is now successful.
The second package we need to add is the SpecFlow.Tools.MsBuild.Generation package. This will instruct SpecFlow to turn our scenarios into executable tests every time we build the project.
As we automate the scenarios, we will need to create class instances and call methods from the application project. To make this possible we need to add a reference to the SpecFlow project pointing to the Shouty
project.
Let’s have a quick look at the project file of the Specs
project. If we did everything well, our project file should look like this. As this is a .NET Core project, we could actually achieve the same outcome just by adding these lines to the project file manually. That probably would have been easier. Maybe next time.
The SpecFlow project will contain the feature file, the automation code, and some other files necessary for the automation infrastructure. Adding all these into the root folder of the project would be quite messy. Teams that work with SpecFlow usually follow some conventions in order to structure their SpecFlow projects. If you have used any Cucumber-family tools before, these conventions will be familiar for you.
To achieve that, let’s create three folders. One, called Features
where we will store… well the feature files, I guess. The second folder that we usually have is called StepDefinitions
. This will be the container for our automation code. And finally we also create a third folder called Support
where we can store any files related to the supporting infrastructure.
Nice!
Let's verify our setup by building the solution.
The build succeeded so now we’re ready to create our first feature file.
In this course we are going to use Cucumber Expressions, which are explained in detail in Chapter 3. In SpecFlow version 3.1, which we use here, Cucumber Expressions are not supported by default. You can enable this feature by adding the CucumberExpressions.SpecFlow.3-1 NuGet package to the project, as you can see in the project file. We also specified the exact SpecFlow package version explicitly to avoid a version compatibility warning. With later SpecFlow versions, these additions won't be needed.
2.4. Add a scenario, wire it up
Let’s create our first feature file using the "Add / New Item…" command. Call the file HearShout.feature
All feature files start with the keyword Feature:
followed by a name.
It’s a good convention to give it a name that matches the file name.
Feature: Hear shout
Now let’s write out our first scenario.
Scenario: Listener is within range
Given Lucy is located 15m from Sean
When Sean shouts "free bagels at Sean's"
Then Lucy hears Sean's message
This is the one where the listener is within range. Given Lucy is located 15 metres from Sean, When Sean shouts "free bagels at Sean's", Then Lucy hears Sean's message.
Build the solution.
As you can see our new scenario has appeared in the "Test Explorer" window.
If you don’t have a "Test Explorer" in your Visual Studio, you can open it using the "Test Explorer" command from the "Test" menu.
Run the tests.
The test execution failed with an error message that says "No matching step definition found for one or more steps". This is because some of our steps are undefined.
Undefined steps are also highlighted in Visual Studio with an orange color.
Undefined means SpecFlow doesn’t know what to do for any of the three steps we wrote in our Gherkin scenario. It needs us to provide some step definitions.
Step definitions translate from the plain language you use in Gherkin into C# code. We write a C# method, then annotate it with a pattern using the Given, When or Then attributes provided by SpecFlow.
When SpecFlow runs a step, it looks for a step definition that matches the text in the Gherkin step. If it finds one, then it executes the code in the step definition.
If it doesn’t find one… well, you’ve just seen what happens. SpecFlow prints out some code snippets that we can use as a basis for new step definitions, but you can get the same snippets from Visual Studio, using the "Define steps…" command in the editor.
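To give you an idea before we paste anything in, a snippet generated for our first step looks roughly like this (the parameter name p0 is just a placeholder invented by the generator, which we will rename shortly):
[Given("Lucy is located {int}m from Sean")]
public void GivenLucyIsLocatedMFromSean(int p0)
{
    throw new PendingStepException();
}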
The dialog can help us to create a new class and paste the selected snippets. We’ll just call the class StepDefinitions
.
The wizard detected that we had a StepDefinitions
folder and saved the new class there. SpecFlow would find it anywhere within the same project, but it is better this way.
Now run the tests again.
This time we’ve got another error message. It says "One or more step definitions are not implemented yet".
The step definition snippets we copied into the project have thrown a PendingStepException, which is what caused this error.
Now our scenario and our steps are pending, meaning that we have to implement the automation code to have a fully automated scenario.
You can think of the PendingStepException
as a reminder of that.
Now that we’ve wired up our step definitions to the Gherkin steps, it’s almost time to start working on our solution. First though, let’s tidy up the generated code.
We’ll rename the int
parameter to something that better reflects its meaning. We’ll call it distance
.
We can put a breakpoint into the method and debug the scenario to see what’s happening.
When the breakpoint hits, the value of the distance
parameter is 15.
Notice that the number 15 does not appear anywhere in our C# code. The value is automatically passed from the scenario step to the step definition.
If you’re interested, that was caused by the {int}
in the step definition pattern or cucumber expression. We’ll explain these patterns in detail in a future lesson.
2.4.1. Lesson 4 - Questions
What do step definitions do? (choose one) ::
- Provide a glossary of domain terms for your stakeholders
- Give Cucumber/SpecFlow a way to automate your Gherkin steps - TRUE
- Add extra meaning to our Gherkin steps
- Generate code from Gherkin documents
Explanation: <java> Step definitions are Java methods that actually do what’s described in each step of a Gherkin scenario.
When it tries to run each step of a scenario, Cucumber will search for a step definition that matches. If there’s a matching step definition, it will call the method to run it. </java>
<js> Step definitions are JavaScript functions that actually do what’s described in each step of a Gherkin scenario.
When it tries to run each step of a scenario, Cucumber will search for a step definition that matches. If there’s a matching step definition, it will call the function. </js>
<ruby> Step definitions are Ruby blocks that actually do what’s described in each step of a Gherkin scenario.
When it tries to run each step of a scenario, Cucumber will search for a step definition that matches. If there’s a matching step definition, it will execute the code in the block. </ruby>
<C#> Step definitions are C# methods that actually do what’s described in each step of a Gherkin scenario.
When it tries to run each step of a scenario, SpecFlow will search for a step definition that matches. If there’s a matching step definition, it will call the method to run it. </C#>
What does it mean when Cucumber/SpecFlow says a step is Pending? (choose one) ::
- The step took too long to execute and was terminated
- <java>The step threw a PendingException, meaning we're still working on implementing that step.</java> <js>The step returned pending, meaning we're still working on implementing that step.</js> <ruby>The step definition threw a Pending error, meaning we're still working on implementing that step.</ruby> <C#>The step definition threw a PendingStepException, meaning we're still working on implementing that step.</C#>
- Cucumber/SpecFlow was unable to find the step definitions
- The scenario is passing
- The scenario is failing
Explanation:
<java> Cucumber tells us that a step (and by inference the Scenario that contains it) is Pending when the automation code throws a PendingException.
The PendingException is a special type of exception provided by Cucumber to allow the development team to signal that automation for a step is a work in progress. This makes it possible to tell the difference between steps that aren’t finished yet and steps that are failing due to a defect in the system.
For example, when we run our tests in a Continuous Integration (CI) environment, we can choose to ignore pending scenarios. </java>
<js> Cucumber tells us that a step (and by inference the Scenario that contains it) is Pending when the automation code throws a Pending error.
This allows the development team to signal that automation for a step is a work in progress. This makes it possible to tell the difference between steps that are still being worked on and steps that are failing due to a defect in the system.
For example, when we run our tests in a Continuous Integration (CI) environment, we can choose to ignore pending scenarios. </js>
<ruby> Cucumber tells us that a step (and by inference the Scenario that contains it) is Pending when the automation code throws a Pending error.
This allows the development team to signal that automation for a step is a work in progress. This makes it possible to tell the difference between steps that are still being worked on and steps that are failing due to a defect in the system.
For example, when we run our tests in a Continuous Integration (CI) environment, we can choose to ignore pending scenarios. </ruby>
<C#> SpecFlow tells us that a step (and by inference the Scenario that contains it) is Pending when the automation code throws a PendingStepException.
The PendingStepException is a special type of exception provided by SpecFlow to allow the development team to signal that automation for a step is a work in progress. This makes it possible to tell the difference between steps that aren't finished yet and steps that are failing due to a defect in the system.
For example, when we run our tests in a Continuous Integration (CI) environment, we can choose to ignore pending scenarios. </C#>
Which of the following might you want to consider when using a snippet generated by Cucumber/SpecFlow?
- Does the name of the method correctly describe the intent of the step? - TRUE
- Do the parameter names correctly describe the meaning of the arguments? - TRUE
- Does the snippet correctly automate the Gherkin step as described? - FALSE
Explanation: When Cucumber/SpecFlow generates a snippet, it has no idea of the business context of the undefined step. The implementation that Cucumber/SpecFlow generates will definitely not automate what’s been written in your Gherkin - that’s up to you! Also, the name of the method and the parameters are just placeholders. It’s the job of the person writing the code to rename the method and parameters to reflect the business domain.
What's the next step in BDD after we've pasted in the step definition snippet and seen it fail with a pending status?
- Check with our project manager about the requirement
- Implement some code that does what the Gherkin step describes - TRUE
- Create a test framework for modelling our application
- Run a manual test to check what the system does
Explanation: If you read the comment in the generated snippet, Cucumber/SpecFlow is telling you to "turn the phrase above into concrete actions".
You need your step definition to call your application and do whatever the Gherkin step describes. In the case of our first step here, we want to tell the system that there are two people in certain locations.
We can use the act of fleshing out the body of our step definition as an opportunity to do some software design. We can think about what we want the interface to our system to look like, from the point of view of someone who needs to interact with it. Should we interact with it through the User Interface, or make a call to the programmer API directly? How would we like that interface to work?
We can do all of this without writing any implementation yet.
This is known as "outside-in" development. It helps us to ensure that when we do come to implementing our solution, we’re implementing it based on real needs.
2.5. Sketch out the solution
Now that we have the step definitions matching, we can start working on our solution. We like to use our scenarios to guide our development, so we’ll start designing the objects we’ll need by sketching out some code in our step definitions.
The scenario will be failing while we do this, but we should see the error messages gradually progressing as we drive out the interface to our object model.
Our next goal is for the scenario to fail because we need to implement the actual business logic. Then we can work on changing the business logic inside our objects to make it pass.
[Binding]
public class StepDefinitions
{
[Given("Lucy is located {int}m from Sean")]
public void GivenLucyIsLocatedMFromSean(int distance)
{
throw new PendingStepException();
}
[When("Sean shouts {string}")]
public void WhenSeanShouts(string p0)
{
throw new PendingStepException();
}
[Then("Lucy hears Sean's message")]
public void ThenLucyHearsSeanSMessage()
{
throw new PendingStepException();
}
}
To implement the first step, we need to create a couple of Person
objects, one for Lucy and one for Sean.
[Given(@"Lucy is located {int}m from Sean")]
public void GivenLucyIsLocatedMFromSean(int distance)
{
var lucy = new Person();
var sean = new Person();
// ...
}
Then we create the Person class in our production project to remove the errors. To make it visible to the SpecFlow project, we need to make it public.
namespace Shouty
{
public class Person
{
}
}
In order to complete the step definition for the Given step, we need to specify the distance between Lucy and Sean.
To keep things simple, we’re going to assume all people are situated on a line: a one-dimensional co-ordinate system. We can always introduce proper geo-locations later. We’ll place Sean in the centre, and Lucy 15 metres away from Sean.
This might not be the design we’ll end up with once this is all working, but it’s a decent place to start.
We can implement our simple distance concept by introducing a MoveTo
method like this:
[Given(@"Lucy is located {int}m from Sean")]
public void GivenLucyIsLocatedMFromSean(int distance)
{
var lucy = new Person();
var sean = new Person();
lucy.MoveTo(distance);
throw new PendingStepException();
}
We have two instances of Person, one representing Lucy and one representing Sean. Then we call a method to move Lucy to the position specified in the scenario.
As this step definition now seems complete, we can remove the pending exception.
There is no MoveTo
method yet, so Visual Studio reports a compilation error. To fix it, we can create the method on the Person
class, but at this stage we don’t bother with the correct implementation. It is enough if it compiles, so an empty method is just fine for now.
namespace Shouty
{
public class Person
{
public void MoveTo(int distance)
{
}
}
}
When we run the scenario, the first step should be passing! The easiest way to see this is to open the test output by clicking on the "Open additional output for this result" link and check the "Standard Output" section. Here you can see all steps executed by SpecFlow with their results. The first step is "done" and the other two are still pending.
Given Lucy is located 15m from Sean
-> done: StepDefinitions.GivenLucyIsLocatedMFromSean(15) (0.0s)
When Sean shouts "free bagels at Sean's"
-> pending: StepDefinitions.WhenSeanShouts("free bagels at Se...")
Then Lucy hears Sean's message
-> skipped because of previous errors
We’re making progress!
We’ll keep working like this until we see the scenario failing for the right reasons.
In the second step definition, we want to tell Sean to shout something.
In order to send instructions to Sean from the "When" step, we need to store him in an instance field, so that he’ll be accessible from all of our step definitions. Let’s move both declarations up to class level together with the initializations.
In the When
step, we’re capturing Sean’s message using this pattern that is mapped to the parameter p0
. Let’s give it a more meaningful name.
Don’t worry if the pattern sounds unfamiliar to you, we will look at that in detail in the next chapter.
And now we can tell Sean to shout the message:
[Binding]
public class StepDefinitions
{
private Person lucy = new Person();
private Person sean = new Person();
//...
[When(@"Sean shouts ""([^""]*)""")]
public void WhenSeanShouts(string message)
{
sean.Shout(message);
}
//...
}
We eliminate the compilation error by declaring the Shout
method in the Person
class.
namespace Shouty
{
public class Person
{
public void MoveTo(int distance)
{
}
public void Shout(string message)
{
}
}
}
When we run the scenario again, the second step is also passing!
Given Lucy is located 15m from Sean
-> done: StepDefinitions.GivenLucyIsLocatedMFromSean(15) (0.0s)
When Sean shouts "free bagels at Sean's"
-> done: StepDefinitions.WhenSeanShouts("free bagels at Se...") (0.0s)
Then Lucy hears Sean's message
-> pending: StepDefinitions.ThenLucyHearsSeanSMessage()
The last step definition is where we implement a check, or assertion. We’ll verify that what Lucy has heard is exactly the same as what Sean shouted.
Once again we're going to write the code we wish we had. For that we are going to use an assertion from the xUnit library, so we need to add the necessary using directive.
[Then(@"Lucy hears Sean's message")]
public void ThenLucyHearsSeanSMessage()
{
Assert.Contains(messageFromSean, lucy.GetMessagesHeard());
}
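The Assert.Contains call comes from xUnit's Assert class, so the step definitions file also needs the corresponding using directive at the top:
using Xunit;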
So we need a way to ask Lucy what messages she has heard, and we also need to know what it was that Sean shouted.
We can record what Sean shouts by storing it in an instance field as the When
step runs. This is a common pattern to use in SpecFlow step definitions when you don’t want to repeat the same test data in different parts of a scenario. Now we can use that in the assertion check.
[When(@"Sean shouts {string}")]
public void WhenSeanShouts(string message)
{
sean.Shout(message);
messageFromSean = message;
}
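For reference, with the message recorded like this, the instance fields at the top of the StepDefinitions class now look something like this:
private Person lucy = new Person();
private Person sean = new Person();
private string messageFromSean;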
We also need to add a GetMessagesHeard
method to our Person class. Let’s just return null for now.
public class Person
{
public void MoveTo(int distance)
{
}
public void Shout(string message)
{
}
public IList<string> GetMessagesHeard()
{
return null;
}
}
…and watch SpecFlow run the tests again.
Given Lucy is located 15m from Sean
-> done: StepDefinitions.GivenLucyIsLocatedMFromSean(15) (0.0s)
When Sean shouts "free bagels at Sean's"
-> done: StepDefinitions.WhenSeanShouts("free bagels at Se...") (0.0s)
Then Lucy hears Sean's message
-> error: Value cannot be null. (Parameter 'collection')
This is great! Whenever we do BDD, getting to our first failing test is a milestone. Seeing the test fail proves that it is capable of detecting errors in our code!
Never trust an automated test that you haven’t seen fail!
Now all we have to do is write the code to make it do what it’s supposed to.
2.5.1. Lesson 5 - Questions
How does the practice of writing a failing test before implementing the solution help us?
- Until you see a scenario fail, you can't be sure that it can ever fail [true]
- There's no need to always see a scenario fail [false]
- BDD practitioners use failing scenarios to guide their development [true]
- A passing scenario implies the functionality it describes has already been implemented, so it may be a duplicate of an existing scenario [true]
- BDD practitioners believe in learning from failure [false]
Explanation: Behaviour-Driven Development comes from Test-Driven Development, where we always start with a failing test, then use that to guide our development. This is sometimes described as red-green-refactor:
- red - write a scenario/test and see it fail
- green - make it pass (as simply as possible)
- refactor - improve your code, while keeping all the tests/scenarios green
It’s surprisingly easy to write scenarios and step definitions that don’t do anything. It’s the transition from red to green that gives us confidence that the scenario and the implementation actually do what we expect.
If a scenario passes as soon as we write it, that means that either it’s not doing what we think it should or the behaviour that it describes has already been implemented. In that case, we’re not developing using behaviour-driven development.
Why did we change to use an instance variable for storing each Person? (select one) ::
- It ensures we can interact with the same object from different steps. [true]
- It's a better way to organise the code
- It's more efficient for performance
- Cucumber/SpecFlow requires us to store our objects as instance variables.
Explanation: In Cucumber/SpecFlow, one of the ways to access the same instance of an object from different step definition methods is to store it in an instance variable.
How did we avoid having to mention the detail of the text Sean had shouted in our When and Then steps? (select one) ::
- We duplicated the text inside our Person class
- We used an instance variable to store the text that was shouted [true]
- We called a method on the Person class to retrieve the messages heard
- We passed the message text in from our Gherkin scenarios
Explanation: When you need to assert for a specific value coming out of your system in a Then step, you can use an instance variable to store it at the point where it goes into the system (in a Given or When step). This means you can avoid duplicating the value in multiple places in your code.
Which flow should we follow when making a Scenario pass? (select one) ::
- Domain modelling → Write some code → Make it compile → Run the scenario & watch it fail
- Write some code → Domain modelling → Make it compile → Run the scenario
- Write some code → Make it compile → Domain modelling → Run the scenario - TRUE
- Domain modelling → Run the scenario → Write some code → Make it compile
Explanation: Our goal at this stage is to get to a failing test, where the only thing left to do to make it pass is make changes to the implementation of the app itself.
On an existing system, we might not need to create so much new code to get to this goal, but we might need to make some changes to how we call the system. This gives us an opportunity to do some lightweight domain modelling.
It may not compile first-time, so we implement the bare-bones of our solution until it does.
We use the scenarios to guide us in our implementation.
2.6. Make the scenario pass
So we have our failing scenario:
Given Lucy is located 15m from Sean
-> done: StepDefinitions.GivenLucyIsLocatedMFromSean(15) (0.0s)
When Sean shouts "free bagels at Sean's"
-> done: StepDefinitions.WhenSeanShouts("free bagels at Se...") (0.0s)
Then Lucy hears Sean's message
-> error: Value cannot be null. (Parameter 'collection')
Lucy is expected to hear Sean’s message, but she hasn’t heard anything: we got null
back from the GetMessagesHeard
method.
In this case, we’re going to cheat. We have a one-line fix that will make this scenario pass, but it’s not a particularly future-proof implementation. Can you guess what it is?
public IList<string> GetMessagesHeard()
{
return new List<string> { "free bagels at Sean's" };
}
I told you it wasn’t very future proof! But let’s see what SpecFlow says to that.
Fantastic! Our scenario is passing for the first time. As long as this is the only message anyone ever shouts, we’re good to ship this thing! But I’m afraid this is not going to be the case so let’s work a bit more on it.
Now, the fact that such a poor implementation can pass all our tests shows us that we need to work on our tests. A more comprehensive set of tests would guide us towards a better implementation.
It’s a good habit to look for the most simple solution though. We can trust that, as our tests evolve, so will our solution.
Instead of writing a note on our TODO list, let’s write another test that shouts a different message. Usually we’d expect a developer to do this using a unit test, but to keep things simple in this lesson, we’re going to write another scenario.
We’ve worked hard. It’s time for a coffee, so let’s come up with an example that has Sean offering free coffee.
Feature: Hear shout
Scenario: Listener is within range
Given Lucy is located 15m from Sean
When Sean shouts "free bagels at Sean's"
Then Lucy hears Sean's message
Scenario: Listener hears a different message
Given Lucy is located 15m from Sean
When Sean shouts "Free coffee!"
Then Lucy hears Sean's message
It fails, reminding us we need to find a solution that doesn’t rely on hard-coding the message. Now when we come back to this code, we can just run the tests and SpecFlow will tell us what we need to do next. We’re done for today!
Test Name: Listener hears a different message
[...]
Result Message:
Assert.Contains() Failure
Not found: Free coffee!
In value: List<String> ["free bagels at Sean's"]
Result StandardOutput:
Given Lucy is located 15m from Sean
-> done: StepDefinitions.GivenLucyIsLocatedMFromSean(15) (0.0s)
When Sean shouts "Free coffee!"
-> done: StepDefinitions.WhenSeanShouts("Free coffee!") (0.0s)
Then Lucy hears Sean's message
-> error: Assert.Contains() Failure
Not found: Free coffee!
In value: List<String> ["free bagels at Sean's"]
Of course, if you’re in the mood, you can always try to implement a solution yourself that makes both scenarios pass. Have fun!
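If you do want to have a go, here is one deliberately naive sketch (not the design the course is heading towards) that would make both scenarios pass with the step definitions we have so far: every Person registers itself in a shared list, and Shout simply delivers the message to everyone.
using System.Collections.Generic;

namespace Shouty
{
    public class Person
    {
        // Naive sketch: a shared registry so that every Person hears every shout.
        // Distance is still ignored, and a static list like this is not a good
        // long-term design - it is just enough to make both scenarios pass.
        private static readonly List<Person> Everyone = new List<Person>();
        private readonly List<string> messagesHeard = new List<string>();

        public Person()
        {
            Everyone.Add(this);
        }

        public void MoveTo(int distance)
        {
            // Position is not used yet.
        }

        public void Shout(string message)
        {
            foreach (var person in Everyone)
            {
                person.messagesHeard.Add(message);
            }
        }

        public IList<string> GetMessagesHeard()
        {
            return messagesHeard;
        }
    }
}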
2.6.1. Questions
Why should we always make sure that we see a scenario fail before we make it pass? (select multiple)
- Until you see a scenario fail, you can't be sure that it can ever fail [true]
- There's no need to always see a scenario fail [false]
- BDD practitioners use failing scenarios to drive their development [true]
- A passing scenario implies the functionality it describes has already been implemented, so it may be a duplicate of an existing scenario [true]
- BDD practitioners believe in learning from failure [false]
Why is our naive implementation of Person.GetMessagesHeard, with a hard-coded message, OK in BDD? (select multiple)
- It shows us that we need better examples to pin down the behaviour we really want from the code. [correct]
- We know we will iterate on our solution, when we come up with more examples of what we want it to do. [correct]
- Nobody is using our solution yet [incorrect]
- We have to do a bad implementation so we can see our test fail. [incorrect]
Look at this diagram: (1) Write a scenario, (2) Automate it and watch it fail, (3) Write just enough code to make it pass. Which stage are we at as the video ends?
- 1
- 2
- 3
3. Expressing yourself
3.1. Cucumber expressions not regular expressions
In the previous chapter we explored the fundamental components of a SpecFlow test suite, and how we use SpecFlow to drive out a solution, test-first.
First we specified the behaviour we wanted, using a Gherkin scenario in a feature file. Then we wrote step definitions to translate the plain English from our scenario into concrete actions in code. Finally, we used the step definitions to guide us in building out our very basic domain model for the Shouty application.
We tend to think of the code that actually pokes around with the system as distinct from the step definitions, so we’ve drawn an extra box labelled "automation code" for this.
Automation code can do almost anything to your application: it can drive a web browser around your site, make HTTP requests to a REST API, or — as you’ve already seen — drive a domain model directly.
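As a purely hypothetical illustration of that separation, a step definition can stay thin and hand the real work over to a dedicated automation class (the ShoutyDriver name here is invented for the example, it is not part of the Shouty code we have written):
using TechTalk.SpecFlow;

// Invented helper: the driver decides how to talk to the system
// (domain model, REST API, or a browser); the step definition doesn't need to know which.
public class ShoutyDriver
{
    public void Shout(string personName, string message)
    {
        // Call the domain model, an HTTP endpoint, or drive the UI here.
    }
}

[Binding]
public class ShoutingSteps
{
    private readonly ShoutyDriver driver = new ShoutyDriver();

    [When("Sean shouts {string}")]
    public void WhenSeanShouts(string message)
    {
        // The step definition only translates the Gherkin phrase into a call on the driver.
        driver.Shout("Sean", message);
    }
}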
Automation code is a big topic that we’ll come back to. First, we want to concentrate on step definitions. Good step definitions are important because they enable the readability of your scenarios. The better you are at matching plain language phrases from Gherkin, the more expressive you can be when writing scenarios. Teams who do this well refer to their features as living documentation - a specification document that never goes out of date.
When SpecFlow first started, we used to use regular expressions to match plain language phrases from Gherkin steps.
Regular expressions have quite an intimidating reputation.
Although regular expressions are still the default matching option, we have started to adopt a simpler alternative introduced in Cucumber, called Cucumber Expressions.
In the SpecFlow version that we use in this course, Cucumber Expressions are not supported by default. You can enable this feature by adding the CucumberExpressions.SpecFlow.3-1 NuGet package to the project, as we described in Chapter 2.
SpecFlow is backwards compatible so you can still use the power of regular expressions if that’s your thing.
This chapter is all about Cucumber Expressions.
3.1.1. Lesson 1 - Questions (Ruby, Java, JS)
Which of the following statements are true?
- Step definitions translate human-readable scenarios into concrete actions in code - TRUE
- BDD practitioners think of "step definitions" and "automation code" as distinct concepts - TRUE
- Cucumber only supports automation through the user interface - FALSE
Answer: A step definition is a piece of code that is called by Cucumber in response to a step in a scenario. You can write any code you like inside a step definition, but we’ve found it easier to maintain if we keep them short. This leads to step definitions calling dedicated automation code to perform concrete actions against the system under construction. That automation code can manipulate the user interface, make a REST call, or drive the domain model directly.
Which of the following statements are true?
- Regular Expressions are exactly the same as Cucumber Expressions - FALSE
- Modern versions of Cucumber support both Cucumber Expressions and Regular Expressions - TRUE
- Cucumber Expressions are more intimidating than Regular Expressions - FALSE
Answer: Regular Expressions are a powerful tool that have been in use in computer science for many decades. They can be hard to understand and maintain, so the Cucumber team created a simplified mechanism, called Cucumber Expressions. However, Cucumber remains backwards compatible, so you can use both Regular Expressions and Cucumber Expressions with modern releases of Cucumber.
3.1.2. Lesson 1 - Questions (SpecFlow/C#/Dotnet)
Which of the following statements are true?
- Step definitions translate human-readable scenarios into concrete actions in code - TRUE
- BDD practitioners think of "step definitions" and "automation code" as distinct concepts - TRUE
- SpecFlow only supports automation through the user interface - FALSE
Answer: A step definition is a piece of code that is called by SpecFlow in response to a step in a scenario. You can write any code you like inside a step definition, but we’ve found it easier to maintain if we keep them short. This leads to step definitions calling dedicated automation code to perform concrete actions against the system under construction. That automation code can manipulate the user interface, make a REST call, or drive the domain model directly.
Which of the following statements are true?
- Regular Expressions are exactly the same as Cucumber Expressions - FALSE
- In modern versions of SpecFlow, steps can be defined using Cucumber Expressions, but this feature has to be enabled first - TRUE
- Cucumber Expressions are more intimidating than Regular Expressions - FALSE
- Regular Expressions can still be used in modern versions of SpecFlow, even if Cucumber Expressions are enabled - TRUE
Answer: Regular Expressions are a powerful tool that have been in use in computer science for many decades. They can be hard to understand and maintain, so the Cucumber team created a simplified mechanism, called Cucumber Expressions, that is now also available for SpecFlow. SpecFlow remains backwards compatible, so you can use both Regular Expressions and Cucumber Expressions with modern releases of SpecFlow.
3.2. Literal expressions
Let’s look at the Shouty scenario from the last chapter.
Feature: Hear shout
Scenario: Listener is within range
Given Lucy is located 15 metres from Sean
When Sean shouts "free bagels at Sean's"
Then Lucy should hear Sean's message
As SpecFlow starts to execute this feature, it will come to the first step of the scenario Given Lucy is located 15 metres from Sean
and say to itself "now - do I have any step definitions that match the phrase Lucy is located 15 metres from Sean
?"
The simplest Cucumber Expression that would match that step is this one:
Lucy is located 15 metres from Sean
That's pretty simple, isn't it? Cucumber Expressions are just string patterns, and the simplest pattern you can use is a perfect match.
In C#, we can use this pattern to make a step definition like this:
[Given("Lucy is located 15 metres from Sean")]
public void GivenLucyIsLocatedMetresFromSean()
{
throw new NotImplementedException("Matched!");
}
We use a normal C# string to pass the cucumber expression to SpecFlow.
3.2.1. Lesson 2 - Questions
Which of the following Cucumber Expressions will match the step "Given Lucy is 15 metres from Sean"?
- "lucy is 15 metres from sean" - FALSE
- "Given Lucy is 15 metres from Sean" - FALSE
- "Lucy is 15 metres from Sean" - TRUE
- "Lucy is 15 metres from Sean Smith" - FALSE
Answer: Cucumber Expressions look for a match of the whole step text EXCLUDING the Gherkin keyword (Given/When/Then/And/But). The match is case sensitive and matches whitespace as well.
3.3. Capturing parameters
Sometimes, we want to write step definitions that allow us to use different values in our Gherkin scenarios. For example, we might want to have other scenarios that place Lucy a different distance away from Sean.
Feature: Hear shout
Scenario: Listener is within range
Given Lucy is located 100 metres from Sean
When Sean shouts "free bagels at Sean's"
Then Lucy should hear Sean's message
To capture interesting values from our step definitions, we can use a feature of Cucumber Expressions called parameters.
For example, to capture the number of metres, we can use the {int} parameter, which is passed as an argument to our step definition:
[Given("Lucy is located {int} metres from Sean")]
public void GivenLucyIsLocatedMetresFromSean(int distance)
Now we’re capturing that value as an argument. The value 100
will be passed to our code automatically by SpecFlow.
Because we've used Cucumber Expressions' built-in {int} parameter type, the value has been cast to an int data type for us automatically, so we can do maths with it if we want.
[Given("Lucy is located {int} metres from Sean")]
public void GivenLucyIsLocatedMetresFromSean(int distance)
{
throw new NotImplementedException($"Lucy is {distance * 100} centimetres from Sean");
}
Cucumber Expressions have a bunch of built-in parameter types: {int}, {float}, {word} and {string}. You can also define your own, as we'll see later.
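As a quick illustration (the step text below is made up for the example; it isn't part of Shouty), two of these built-in types could be combined in a single pattern:
// Hypothetical example combining two built-in parameter types.
// It would match a step such as: When Sean shouts "free coffee" from 10 metres away
[When("Sean shouts {string} from {int} metres away")]
public void WhenSeanShoutsFromMetresAway(string message, int distance)
{
    // 'message' receives the quoted text and 'distance' the number,
    // both converted by SpecFlow before the method is called.
}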
3.3.1. Lesson 3 - Questions (Ruby, Java, JS)
Which of the following is NOT a built in Cucumber Expression parameter type?
- float - FALSE
- integer - TRUE
- string - FALSE
- word - FALSE
Answer: The Cucumber Expression parameter type that matches an integer is {int}, not {integer}.
Which of the following statements is true?
- You cannot create your own Cucumber Expression parameter types - FALSE
- Cucumber discards the value that matches a Cucumber Expression parameter type - FALSE
- Your step definition code will be passed the value that matched the Cucumber Expression parameter type - TRUE
- Cucumber always passes the matched parameter as a string - FALSE
Answer: Cucumber will pass the step definition a parameter for each Cucumber Expression parameter type. Cucumber will attempt to convert the text that matched into a suitable format. Using the {int}
parameter type will result in a number being passed to the step definition. You can extend the predefined Cucumber Expression parameter types, by creating your own.
3.3.2. Lesson 3 - Questions (SpecFlow/C#/Dotnet)
Which of the following is NOT a built in Cucumber Expression parameter type?
- float - FALSE
- integer - TRUE
- string - FALSE
- word - FALSE
Answer: The Cucumber Expression parameter type that matches an integer is {int}, not {integer}.
Which of the following statements is true?
- You cannot create your own Cucumber Expression parameter types - FALSE
- Cucumber discards the value that matches a Cucumber Expression parameter type - FALSE
- Your step definition code will be passed the value that matched the Cucumber Expression parameter type - TRUE
- SpecFlow always passes the matched parameter as a string - FALSE
Answer: SpecFlow will pass the step definition a parameter for each Cucumber Expression parameter type. SpecFlow will attempt to convert the text that matched into a suitable format. Using the {int}
parameter type will result in a number being passed to the step definition. You can extend the predefined Cucumber Expression parameter types, by creating your own.
3.4. Flexibility
Although it’s important to try to use consistent terminology in our Gherkin scenarios to help develop the ubiquitous language of your domain, we also want scenarios to read naturally, which sometimes means allowing a bit of flexibility.
Ideally, the language used in scenarios should never be constrained by your step definitions. Otherwise they’ll end up sounding like they were written by robots. Or worse, they read like code.
One common example is the problem of plurals. Suppose we want to place Lucy and Sean just 1 metre apart:
Given Lucy is located 1 metre from Sean
Because we’ve used the singular metre
instead of the plural metres
we don’t get a match as you can see from the different step color:
Given Lucy is located 1 metre from Sean
What a pain!
Fear not. We can just surround the s
in parentheses to make it optional, like this:
[Given("Lucy is located {int} metre(s) from Sean")]
public void GivenLucyIsLocatedMetresFromSean(int distance)
We build the project and now our step matches:
Given Lucy is located 1 metre from Sean
This is one way to smooth off some of the rough edges in your cucumber expressions, and allow your scenarios to be as expressive as possible.
Another is to allow alternates - different ways of saying the same thing. For example, to accept this step:
Given Lucy is standing 1 metre from Sean
…we can use this Cucumber Expression:
[Given("Lucy is located/standing {int} metre(s) from Sean")]
public void GivenLucyIsLocatedMetresFromSean(int distance)
Now we can use either 'standing' or 'located' in our scenarios, and both will match just fine as you can see…
3.4.1. Lesson 4 - Questions
How can you express in a Cucumber Expression that matching some text is optional?
- Enclose it in square brackets: [] - FALSE
- Enclose it in parentheses: () - TRUE
- Place a question mark after it: ? - FALSE
- Precede it with a slash: / - FALSE
Answer: Any text in a Cucumber Expression that is surrounded by parentheses () is considered optional.
What does a slash / separating words mean in a Cucumber Expression?
- The words are considered alternatives - the Cucumber Expression will match any of them - TRUE
- It doesn't mean anything special - the Cucumber Expression will match the slash as a literal character - FALSE
- The word that follows the slash is considered optional - FALSE
Answer: Words in a Cucumber Expression that are separated by a slash / are considered alternates. There must be no whitespace between the word and the slash.
Which of the following Cucumber Expressions would match both "it weighed 3 grammes" and "it weighed 1 gramme"?
- "it weighed {int} gramme(s)" - TRUE
- "it weighed 1/3 gramme/s" - FALSE
- "it weighed 1/3 gramme(s)" - TRUE
- "it weighed 1 / 3 gramme(s)" - FALSE
- "it weighed 1/2/3 gramme/grammes" - TRUE
Answer: Any text surrounded by parentheses ()
is considered optional. Any words separated by a slash /
are considered to be alternates. You can find full documentation about Cucumber Expressions at https://cucumber.io/docs/cucumber/cucumber-expressions/
3.5. Custom parameters
Although you can get a long way with Cucumber Expressions' built-in parameter types, you get real power when you define your own custom parameter types. This allows you to transform the text captured from the Gherkin into any object you like before it’s passed into your step definition.
For example, let's define our own {Person} custom parameter type that will convert the string Lucy into an instance of Person automatically.
We can start with the step definition, which would look something like this:
[Given("{Person} is located/standing {int} metre(s) from Sean")]
public void GivenPersonIsLocatedMetresFromSean(Person person, int distance)
{
person.MoveTo(distance);
}
If we build or run the tests at this point we’ll see an error, because we haven’t defined the {Person}
parameter type yet.
Undefined parameter type: 'Person'
Here’s how we define one.
Let’s create a new class called ParameterTypes
in the Support
folder:
We’re going to create a method, which takes the name of a person as a string, and returns an instance of our Person
class with that name.
public Person ConvertPerson(string name)
{
return new Person(name);
}
SpecFlow will use the name of the return type - Person - as the parameter name we use inside the curly brackets in our step definition expressions, as soon as we've wired it up.
To do that, we add the StepArgumentTransformation attribute, which comes from the SpecFlow namespace. We also need to add a Binding attribute to the class, otherwise SpecFlow will not find our conversion method.
By default, any name is recognized as a person, but we could restrict it to specific names using - gasp! - a regular expression.
This is not what we want in this project now, so let’s leave it as it was. You can find more examples about using StepArgumentTransformation
in the SpecFlow documentation.
All of this means that when we run our step, we’ll be passed an instance of Person
into our step definition automatically.
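Putting those pieces together, the ParameterTypes class in the Support folder would look something like this (the namespace shown is an assumption based on the folder layout, and it assumes, as in the step definition above, that Person now has a constructor taking a name):
using TechTalk.SpecFlow;

namespace Shouty.Specs.Support
{
    [Binding]
    public class ParameterTypes
    {
        // Converts the name captured from the scenario text into a Person instance.
        // An optional regular expression argument, e.g. [StepArgumentTransformation(@"Lucy|Sean")],
        // could restrict which names are matched.
        [StepArgumentTransformation]
        public Person ConvertPerson(string name)
        {
            return new Person(name);
        }
    }
}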
Custom parameters allow you to bring your domain model - the names of the classes and objects in your solution - and your domain language - the words you use in your scenarios and step definitions - closer together.
3.5.1. Lesson 5 - Questions (Java)
What role do Regular Expressions play in Cucumber Expressions?
- None
- Cucumber Expressions provide a subset of Regular Expression syntax
- Cucumber Expressions are exactly the same as Regular Expressions
- A Regular Expression is used to define the text to be matched when using a custom Parameter Type - TRUE
Answer: We use a Regular Expression to specify the text that should be matched when a custom Parameter Type is used in a Cucumber Expression.
How would you use the custom Parameter Type defined by the following code?
@ParameterType("activated")
public Status state(String activationState) {
    return new Status(activationState);
}
- {activated}
- {activationState}
- {state} - TRUE
- {Status}
Answer: The name of a custom Parameter Type is defined by the name of the method that is decorated with the @ParameterType
annotation.
3.5.2. Lesson 5 - Questions (Javascript)
What role do Regular Expressions play in Cucumber Expressions?
- None
- Cucumber Expressions provide a subset of Regular Expression syntax
- Cucumber Expressions are exactly the same as Regular Expressions
- A Regular Expression is used to define the text to be matched when using a custom Parameter Type - TRUE
Answer: We use a Regular Expression to specify the text that should be matched when a custom Parameter Type is used in a Cucumber Expression.
How would you use the custom Parameter Type defined by the following code?
defineParameterType({
  name: 'state',
  regexp: /activated/,
  transformer: activationState => new Status(activationState)
})
- {activated}
- {activationState}
- {state} - TRUE
- {Status}
Answer: The name of a custom Parameter Type is defined by the name
parameter passed to the defineParameterType
method.
3.5.3. Lesson 5 - Questions (Ruby)
What role do Regular Expressions play in Cucumber Expressions?
- None
- Cucumber Expressions provide a subset of Regular Expression syntax
- Cucumber Expressions are exactly the same as Regular Expressions
- A Regular Expression is used to define the text to be matched when using a custom Parameter Type - TRUE
Answer: We use a Regular Expression to specify the text that should be matched when a custom Parameter Type is used in a Cucumber Expression.
How would you use the custom Parameter Type defined by the following code?
ParameterType(
  name: 'state',
  regexp: /activated/,
  transformer: ->(activationState) { Status.new(activationState) }
)
- {activated}
- {activationState}
- {state} - TRUE
- {Status}
Answer: The name of a custom Parameter Type is defined by the name
parameter passed to the ParameterType
method.
3.5.4. Lesson 5 - Questions (SpecFlow/C#/Dotnet)
What role do Regular Expressions play in Cucumber Expressions?
- None
- Cucumber Expressions provide a subset of Regular Expression syntax
- Cucumber Expressions are exactly the same as Regular Expressions
- A Regular Expression is used to restrict the text to be matched when using a custom parameter type (StepArgumentTransformation) - TRUE
Answer: We use a Regular Expression to restrict the text that should be matched when a custom parameter type (StepArgumentTransformation) is used in a Cucumber Expression. You can find more examples of how to use StepArgumentTransformation
in the SpecFlow documentation.
How would you use the custom Parameter Type defined by the following code?
[StepArgumentTransformation]
public Status ConvertState(string activationState)
{
    return new Status(activationState);
}
- {activated} or {deactivated}
- {activationState}
- {Status} - TRUE
- {ConvertState}
Answer: The name of a custom Parameter Type is defined by the name of the return type of the method that is decorated with the [StepArgumentTransformation] attribute.
4. Cleaning up
4.1. The importance of readability
In the previous chapter, we talked about the importance of having readable scenarios, and you learned some new skills with Cucumber Expressions to help you achieve that goal. Those skills will give you the confidence to write scenarios exactly the way you want, knowing you’ll be able to match the Gherkin steps easily from your step definition code.
We emphasise readability because from our experience, writing Gherkin scenarios is a software design activity. Cucumber was created to bridge the communication gap between business domain experts and development teams. When you collaborate with domain experts to describe behaviour in Gherkin, you’re expressing the group’s shared understanding of the problem you need to solve. The words you use in your scenarios can have a deep impact on the way the software is designed, as we’ll see in later chapters.
The more fluent you become in writing Gherkin, the more useful a tool it becomes to help you facilitate this communication. Keeping your scenarios readable means you can get feedback at any time about whether you’re building the right thing. Over time, your features become living documentation about your system. We can’t emphasize enough how important it is to see your scenarios as more than just tests.
Maintaining a living document works both ways: the scenarios will guide your solution design, but you may also have to update your Gherkin to reflect the things you learn as you build the solution. This dance back and forth between features and solution code is an important part of BDD.
In this chapter, we’ll learn about feature descriptions, the Background keyword, and about keeping scenarios and code up-to-date with your current understanding of the project.
First, let’s catch up with what’s been happening on the Shouty project.
4.1.1. Continuity Announcement
Before we start, I need to explain a continuity error between the previous chapter and this one.
In the last chapter we showed you how to use parameter types to automatically create an instance of our Person
class whenever we used it in a step definition.
The first version of this video series was created many years ago, before we had added parameter types to Cucumber. Although we updated the previous chapter to demonstrate parameter types, we haven’t yet updated this one. So you’ll notice, as you follow along here, that there’s no mention of parameter types anymore.
Some of the things we’ll be doing to clean up the code in this chapter would be even cleaner if we used parameter types, and we hope to update this video someday to incorporate them into the story. In the meantime we’ll leave it as an exercise for you to think about how you would change the work we do in this episode to make the most of them.
Have fun, and don’t forget to come on the #school community Slack channel to ask if you need any guidance!
4.1.2. Lesson 1 - Questions (Ruby, Java, JS)
Which aspects of Cucumber help bridge the communication gap between business domain experts and development teams?
-
The readability of Gherkin scenarios - TRUE
-
Cucumber’s availability for different programming languages - FALSE
-
Being able to express scenarios using your own domain language - TRUE
Answer: The feature files that Cucumber understands are written using Gherkin, so you can create scenarios in your own domain language that can be read and understood by everyone involved in specifying and delivering your software.
How do Cucumber feature files differ from more traditional automated tests?
-
The purpose of feature files is to create readable specifications that can be understood by the whole team, not to provide test coverage
-
Business-readable specifications make it easier to obtain feedback about what you’re building while you’re building it, rather than waiting for a later test cycle
-
Feature files become "living documentation" when they are automated, providing a single source of truth for the whole team
-
Feature files should be written collaboratively by business and delivery, not in isolation by testers
-
There is no difference - FALSE
Answer: BDD is the collaborative approach to developing software that Cucumber was created to support. Although Cucumber scenarios do act as tests when they are automated, this is not their primary purpose. Their primary purpose is to provide a single, shared specification, written in the domain language of your business — facilitating collaboration, feedback, and reliable documentation. The primary purpose of traditional automated tests, on the other hand, is to check that the software behaves as expected.
4.1.3. Lesson 1 - Questions (SpecFlow/C#/Dotnet)
Which aspects of SpecFlow help bridge the communication gap between business domain experts and development teams?
-
The readability of Gherkin scenarios - TRUE
-
Gherkin scenarios can be automated in different programming languages - FALSE
-
Being able to express scenarios using your own domain language - TRUE
Answer: The feature files that SpecFlow understands are written using Gherkin, so you can create scenarios in your own domain language that can be read and understood by everyone involved in specifying and delivering your software.
How do SpecFlow feature files differ from more traditional automated tests?
-
The purpose of feature files is to create readable specifications that can be understood by the whole team, not to provide test coverage
-
Business-readable specifications make it easier to obtain feedback about what you’re building while you’re building it, rather than waiting for a later test cycle
-
Feature files become "living documentation" when they are automated, providing a single source of truth for the whole team
-
Feature files should be written collaboratively by business and delivery, not in isolation by testers
-
There is no difference - FALSE
Answer: BDD is the collaborative approach to developing software that SpecFlow was created to support. Although SpecFlow scenarios do act as tests when they are automated, this is not their primary purpose. Their primary purpose is to provide a single, shared specification, written in the domain language of your business — facilitating collaboration, feedback, and reliable documentation. The primary purpose of traditional automated tests, on the other hand, is to check that the software behaves as expected.
4.2. Review changes that happened while we were away
While we were away, the developers of Shouty have been busy working on the code. Let’s have a look at what they’ve been up to.
We’ll start out by running our scenarios.
Great! It looks like both these scenarios are working now - both the different messages that Sean shouts are being heard by Lucy.
Let’s dig into the code and see how these steps have been automated.
[Binding]
public class StepDefinitions
{
private Person lucy;
private Person sean;
private string messageFromSean;
[Given("Lucy is {int} metres from Sean")]
public void GivenLucyIsMetresFromSean(int distance)
{
var network = new Network();
sean = new Person(network);
lucy = new Person(network);
lucy.MoveTo(distance);
}
[When("Sean shouts {string}")]
public void WhenSeanShouts(string message)
{
sean.Shout(message);
messageFromSean = message;
}
[Then("Lucy should hear Sean's message")]
public void ThenLucyShouldHearSeansMessage()
{
Assert.Contains(messageFromSean, lucy.GetMessagesHeard());
}
}
In the step definition layer, we can see that a new class has been defined, the Network. We’re creating an instance of the network here. Then we pass that network instance to each of the Person instances we create here. So both instances of Person depend on the same instance of network. The Network is what allows people to send messages to one another.
There are also a couple of new unit test classes in the Shouty solution, one for the Network class, and another one for the Person class.
Unit tests are fine-grained tests that define the precise behaviour of each of those classes. We’ll talk more about this in a future lesson, but feel free to have a poke around in there in the meantime.
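We won’t reproduce those unit tests here, but a fine-grained test for the Person class might look something like this sketch (the test name and assertion are our assumption, not the project’s actual test code):
using Xunit;

public class PersonTests
{
    [Fact]
    public void Remembers_the_messages_it_hears()
    {
        // A Person subscribed to a Network should record any message it hears.
        var network = new Network();
        var person = new Person(network);

        person.Hear("free bagels at Sean's");

        Assert.Contains("free bagels at Sean's", person.GetMessagesHeard());
    }
}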
The Run All Tests command will run those unit tests as well as the SpecFlow scenarios.
The first thing I notice coming back to the code is that the feature file is still talking about the distance between Lucy and Sean, but we haven’t actually implemented any behaviour around that yet.
Feature: Hear shout
Scenario: Listener is within range
Given Lucy is 15 metres from Sean
When Sean shouts "free bagels at Sean's"
Then Lucy should hear Sean's message
Scenario: Listener hears a different message
Given Lucy is 15 metres from Sean
When Sean shouts "Free coffee!"
Then Lucy should hear Sean's message
This happens to us all the time - we have an idea for a new feature, but then we find the problem is too complex to solve all at once, so we break it down into simpler steps. If we’re not careful, little bits of that original idea can be left around like clutter, in the scenarios and in the code. That clutter can get in the way, especially if plans change.
We’re definitely going to develop this behaviour, but we’ve decided to defer it to our next iteration. Our current solution is just focussed on broadcasting messages between the people on the network.
Let’s clean up the feature to reflect that current understanding.
4.2.1. Lesson 2 - Questions (Ruby, Java, JS)
Why have the Shouty developers created unit tests for the Person and Network classes?
-
They don’t understand how to do BDD - FALSE
-
Unit tests are fine-grained tests that define the precise behaviour of each of those classes - TRUE
-
Unit tests run faster than Cucumber scenarios - FALSE
Answer: Unit tests (also known as programmer tests) are used to define precise behaviour of units of code that may not be interesting to the business — and so should not be written in a feature file. Writing unit tests is entirely compatible with BDD.
There is no reason for Cucumber scenarios to run significantly slower than unit tests. The Shouty step definitions that we’ve seen so far interact directly with the domain layer and run extremely fast.
Why is the distance between Sean and Lucy not being used by Shouty?
-
The team has decided to defer implementing range functionality until a later iteration - TRUE
-
The developers have misunderstood the specification
-
The specification has changed since the scenarios were written
-
The distance between Sean and Lucy is being used to decide if the shout is "in range"
Answer: Teams often find that the problem is too big to solve all at once, so we split it into thinner slices. Working in smaller steps is a faster, safer way of delivering software. In this case the team has decided that broadcasting messages and calculating if a person is in-range are different problems that they will address separately.
4.2.2. Lesson 2 - Questions (SpecFlow)
Why have the Shouty developers created unit tests for the Person and Network classes?
-
They don’t understand how to do BDD - FALSE
-
Unit tests are fine-grained tests that define the precise behaviour of each of those classes - TRUE
-
Unit tests run faster than Cucumber scenarios - FALSE
Answer: Unit tests (also known as programmer tests) are used to define precise behaviour of units of code that may not be interesting to the business — and so should not be written in a feature file. Writing unit tests is entirely compatible with BDD.
There is no reason for SpecFlow scenarios to run significantly slower than unit tests. The Shouty step definitions that we’ve seen so far interact directly with the domain layer and run extremely fast.
Why is the distance between Sean and Lucy not being used by Shouty?
-
The team has decided to defer implementing range functionality until a later iteration - TRUE
-
The developers have misunderstood the specification
-
The specification has changed since the scenarios were written
-
The distance between Sean and Lucy is being used to decide if the shout is "in range"
Answer: Teams often find that the problem is too big to solve all at once, so we split it into thinner slices. Working in smaller steps is a faster, safer way of delivering software. In this case the team has decided that broadcasting messages and calculating if a person is in-range are different problems that they will address separately.
4.3. Description field
After the feature keyword, we have space in a Gherkin document to write any arbitrary text that we like. We call this the feature’s description. This is a great place to write up any notes or other details that can’t easily be expressed in examples. You might have links to wiki pages or issue trackers, or to wireframes. You can put anything you like in here, as long as you don’t start a line with a Gherkin keyword, like “Rule:” or “Scenario:”.
In this case, we can add a high level description of the Shouty application. Because Shouty doesn’t yet filter by proximity, we can also write a todo list here so it’s clear that we do intend to get to that soon.
Feature: Hear shout
Shouty allows users to "hear" other users "shouts" as long as they are close enough to each other.
To do:
- only shout to people within a certain distance
Changing the description doesn’t change anything about how SpecFlow will run this feature. It just helps the human beings reading this document to understand more about the system you’re building.
Our two scenarios are examples of how Shouty can broadcast a shout to other users. This is one of the main business rules, which we can document using the Rule keyword. We’ll learn more about this in a later chapter.
Rule: Shouts can be heard by other users
Scenario: Listener is within range
Given Lucy is 15 metres from Sean
When Sean shouts "free bagels at Sean's"
Then Lucy should hear Sean's message
Scenario: Listener hears a different message
Given Lucy is 15 metres from Sean
When Sean shouts "Free coffee!"
The step “Given Lucy is 15 metres from Sean” is misleading, since the distance between the two people is not yet relevant in our current model.
[Given("Lucy is {int} metres from Sean")]
The step definition calls the MoveTo
method on Person,
lucy.MoveTo(distance);
but the MoveTo method doesn’t actually do anything.
public void MoveTo(int distance)
{
}
Let’s simplify this code to do just what it needs to do right now, and no more. We can start from the scenario by changing this single step to express what’s actually going on. We’ll work on one scenario at a time, and update the other one once we’re happy with this one.
Scenario: Listener hears a message
Given a person named Lucy
And a person named Sean
When Sean shouts "free bagels at Sean's"
Then Lucy should hear Sean's message
Now the scenario names make sense, and we have two steps, each creating a person. Notice we’re starting to reveal some more of our domain language here: we’ve introduced the words Person
and name
. Person is already a part of our domain language, so it’s nice to have that revealed in the language of the scenario. Name may well become a property of our person soon, so it’s also useful to have that surfaced so we can get feedback about it from the team.
One thing we’ve lost by doing this is the idea that, eventually, the two people will need to be close to each other for the message to be transmitted. We definitely wouldn’t remove detail like that without discussing it with the other people who were involved in writing and reviewing this scenario.
In this case, as well as adding it to the TODO list above, we’ve decided to document the range rule, and write a couple of new empty scenarios to remind us to implement that behaviour later.
Rule: Shouts should only be heard if listener is within range
Scenario: Listener is within range
Scenario: Listener is out of range
Let’s press on. We can invoke the Define steps… command to generate new step definition snippets for the new steps. Copy them to clipboard and paste them into our steps file.
[Given("a person named Lucy")]
public void GivenAPersonNamedLucy()
{
throw new PendingStepException();
}
[Given("a person named Sean")]
public void GivenAPersonNamedSean()
{
throw new PendingStepException();
}
In the next lesson we’ll look at a couple of ways that we can implement these new step definitions.
4.3.1. Lesson 3 - Questions
What is a feature file description?
-
Any lines of text between the feature name and the first rule or scenario - TRUE
-
A line of text that starts with the
#
character -
A block of text introduced by the
Description:
keyword
Answer: You can add a free text description of a feature file after the Feature:
line that defines the feature’s name. The description can be any number of lines long. The description continues until the first rule, scenario, or scenario outline is encountered.
What is the purpose of writing an empty scenario?
-
It is not valid Gherkin syntax to write an empty scenario
-
Empty scenarios act as a reminder that we have more work to do - TRUE
-
Empty scenarios are a way of pretending that we have done more work than we actually have
Answer: Cucumber treats empty scenarios as work that needs to be done and reports them as pending.
4.4. The "Before" hook
We now have two step definitions to implement, and that presents us with a bit of a problem. We need the same instance of Network available in both. We could just assume that the Lucy step will always run first and create it there, but that seems fragile. If someone wrote a new scenario that didn’t create people in the right order, they’d end up with no Network instance, and weird bugs. We want our steps to be as independent as possible, so they can be easily composed into new scenarios.
[Given("a person named Lucy")]
public void GivenAPersonNamedLucy()
{
network = new Network();
lucy = new Person(network);
}
[Given("a person named Sean")]
public void GivenAPersonNamedSean()
{
sean = new Person(network);
}
There are a couple of different ways to create this network instance in C#. The most straightforward is to use a network field and initialize it in the declaration of the StepDefinitions
class. Every time SpecFlow runs a scenario it creates a new instance of this class, so we’ll get a fresh instance of the Network for each scenario.
public class StepDefinitions
{
private Person lucy;
private Person sean;
private string messageFromSean;
private Network network = new Network();
As an alternative, which can be useful if you have more complex setup to do, you can use a hook.
We need an instance of Network in every scenario, so we can declare a BeforeScenario
Hook that creates one before each scenario starts, like this:
Now we can use that Network instance as we create Lucy and Sean in these two new steps.
[BeforeScenario]
public void CreateNetwork()
{
network = new Network();
}
[Given("a person named Lucy")]
public void GivenAPersonNamedLucy()
{
lucy = new Person(network);
}
[Given("a person named Sean")]
public void GivenAPersonNamedSean()
{
sean = new Person(network);
}
It should be working again now. Let’s run the tests to check.
Good. Let’s do the same with the other scenario.
Scenario: Listener hears a different message
Given a person named Lucy
And a person named Sean
When Sean shouts "Free coffee!"
Then Lucy should hear Sean's message
Now we can remove this old step definition.
[Given("Lucy is {int} metres from Sean")]
public void GivenLucyIsMetresFromSean(int distance)
{
var network = new Network();
sean = new Person(network);
lucy = new Person(network);
lucy.MoveTo(distance);
}
We know we’ll need something like this in the future when we implement the proximity rule, but we don’t want to second-guess what that code will look like, so let’s clean it out for now.
using System;
using TechTalk.SpecFlow;
using Xunit;
namespace Shouty.Specs.StepDefinitions
{
[Binding]
public class StepDefinitions
{
private Person lucy;
private Person sean;
private string messageFromSean;
private Network network;
[BeforeScenario]
public void CreateNetwork()
{
network = new Network();
}
[Given("a person named Lucy")]
public void GivenAPersonNamedLucy()
{
lucy = new Person(network);
}
[Given("a person named Sean")]
public void GivenAPersonNamedSean()
{
sean = new Person(network);
}
[When("Sean shouts {string}")]
public void WhenSeanShouts(string message)
{
sean.Shout(message);
messageFromSean = message;
}
[Then("Lucy should hear Sean's message")]
public void ThenLucyShouldHearSeansMessage()
{
Assert.Contains(messageFromSean, lucy.GetMessagesHeard());
}
}
}
Now we have one last bit of dead code left, the MoveTo
method on Person.
Let’s clean that up too.
using System;
using System.Collections.Generic;
namespace Shouty
{
public class Person
{
private readonly Network network;
private readonly List<string> messagesHeard = new List<string>();
public Person(Network network)
{
this.network = network;
network.Subscribe(this);
}
public void Shout(string message)
{
network.Broadcast(message);
}
public IList<string> GetMessagesHeard()
{
return messagesHeard;
}
public void Hear(string message)
{
messagesHeard.Add(message);
}
}
}
And we’re still green!
4.4.1. Lesson 4 - Questions
When does a BeforeScenario hook run?
-
Before every run of SpecFlow
-
Before the first scenario in each feature file
-
Before each scenario - TRUE
-
Before each step in a scenario
Answer: A BeforeScenario hook runs before each scenario. Since there is no way to tell if a hook exists by looking at the feature file, you should only use hooks for performing actions that you don’t expect the business to provide feedback on.
You can read more about hooks at https://docs.specflow.org/projects/specflow/en/latest/Bindings/Hooks.html
Why isn’t it a good idea to create a Network instance in the same step definition where we create Lucy?
-
It is a good idea
-
Steps should be independent and composable. If the Network is only created when Lucy is created, future scenarios will be forced to create Lucy - TRUE
-
We’ll need to create another Network instance when we create Sean
Answer: Every person needs to share the same Network instance, which means we need to create the Network before we create any people. By creating the Network instance in the same step definition that we create Lucy, we are forcing people to:
* create Lucy — even if the scenario doesn’t need Lucy
* create Lucy before any other person — because otherwise the Network will not have been created yet
4.5. Create Person in a generic stepdef
OK, so we’ve cleaned things up a bit, to bring the scenarios, the code and our current understanding of the problem all into sync. What’s nice to see is how well those new steps that create Lucy and Sean match the code inside the step definition.
When step definitions have to make a big leap to translate between our plain-language description of the domain in the Gherkin scenario, and the code, that’s usually a sign that something is wrong. We like to see step definitions that are only one or two lines long, because that usually indicates our scenarios are doing a good job of reflecting the domain model in the code, and vice-versa.
One problem that we still have with these scenarios is that we’re very fixed to only being able to use these two characters, Lucy and Sean. If we want to introduce anyone else into the scenario, we’re going to be creating quite a lot of duplicate code. In fact, the two step definitions for creating Lucy and Sean are almost identical, apart from those instance fields.
On a real project we wouldn’t bother about such a tiny amount of duplication at this early stage, but this isn’t a real project! Let’s play with the skills we learned in the last chapter to make a single step definition that can create Lucy or Sean.
The first problem we’ll need to tackle is these hard-coded instance field names.
We can use a Dictionary
to store all the people involved in the scenario.
Let’s try replacing Lucy first.
We’ll start by creating a new Dictionary
in the before hook, like this.
private Dictionary<string, Person> people;
[BeforeScenario]
public void CreateNetwork()
{
network = new Network();
people = new Dictionary<string, Person>();
}
Now we can store Lucy in a key in that Dictionary. We’ll use her name as the key, hard-coding it for now.
[Given("a person named Lucy")]
public void GivenAPersonNamedLucy()
{
people.Add("Lucy", new Person(network));
}
Finally, where we check Lucy’s messages heard here in the assertion, we need to fetch her out of the Dictionary.
[Then("Lucy should hear Sean's message")]
public void ThenLucyShouldHearSeansMessage()
{
Assert.Contains(messageFromSean, people["Lucy"].GetMessagesHeard());
}
With that little refactoring done, we can now try and make this first step generic for any name.
Using your new-found Cucumber Expression skills from the last chapter, you’ll know that if we replace the word Lucy here with a parameter, the name will be passed into our step definition as an argument, here. {word} is a special parameter type that matches a single… word. What else? Now we can use that as the key in the Dictionary.
[Given("a person named {word}")]
public void GivenAPersonNamed(string name)
{
people.Add(name, new Person(network));
}
If we try and run the tests now, we get an error from SpecFlow about an ambiguous match.
Our generic step definition is now matching the step “a person named Sean”, but so is the original one. In bigger projects, this can be a real issue, so this warning is important.
Let’s remove the old step definition, and fetch Sean from the Dictionary here where he shouts his message.
[When("Sean shouts {string}")]
public void WhenSeanShouts(string message)
{
people["Sean"].Shout(message);
messageFromSean = message;
}
Great, we’re green again.
4.5.1. Lesson 5 - Questions (SpecFlow)
Why should a step definition be short?
-
Because the plain-language description of the domain in the Gherkin step should be close to the domain model in the code - TRUE
-
Step definitions don’t need to be short
-
SpecFlow limits the length of step definitions to five lines of code
Answer: Step definitions are a thin glue between the plain-language description in a scenario and the software that we’re building. If the business domain and the solution domain are aligned, then there should be little translation to do in the step definition.
What does it mean when SpecFlow complains about an ambiguous step?
-
SpecFlow couldn’t find a step definition that matches a step
-
SpecFlow only found one step definition that matches a step
-
SpecFlow found more than one step definition that matches a step - TRUE
Answer: If more than one step definition matches a step, then SpecFlow doesn’t know which one to call. When this ambiguity occurs, SpecFlow issues an error, rather than try to choose between the matching step definitions.
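For example (recapping what we just saw in the lesson), two step definitions like these would both match the step "Given a person named Sean", so SpecFlow reports an ambiguous match rather than guessing which one to call:
// Both of these bindings match "a person named Sean".
[Given("a person named Sean")]
public void GivenAPersonNamedSean()
{
    people.Add("Sean", new Person(network));
}

[Given("a person named {word}")]
public void GivenAPersonNamed(string name)
{
    people.Add(name, new Person(network));
}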
4.6. Backgrounds
Let’s switch back to the feature file to show you one more technique for improving the readability of your scenarios.
Rule: Shouts can be heard by other users
Scenario: Listener hears a message
Given a person named Lucy
And a person named Sean
When Sean shouts "free bagels at Sean's"
Then Lucy should hear Sean's message
Scenario: Listener hears a different message
Given a person named Lucy
And a person named Sean
When Sean shouts "Free coffee!"
Then Lucy should hear Sean's message
When we have common context steps - the Givens - in all the scenarios in our feature, it can sometimes be useful to get those out of the way.
We can literally move them into the background, using a Background keyword, like this:
Background:
Given a person named Lucy
And a person named Sean
As far as SpecFlow is concerned, these scenarios haven’t changed. It will still create both Lucy and Sean as the first things it does when running each of these scenarios.
But from a readability point of view, we can now see more clearly what’s important and interesting about these two scenarios - in this case, the message being shouted.
Rule: Shouts can be heard by other users
Scenario: Listener hears a message
When Sean shouts "free bagels at Sean's"
Then Lucy should hear Sean's message
Scenario: Listener hears a different message
When Sean shouts "Free coffee!"
Then Lucy should hear Sean's message
Notice we just went straight into When steps in our scenarios. That’s absolutely fine. We still have a context for the scenario, but we’ve chosen to push it off into the background.
Again, it’s debatable whether we’d bother to use a Background to do this on a real project, but this at least illustrates the technique. We rarely use Backgrounds in our projects, because although they can improve readability by removing the duplication of repeated contexts, they also harm readability by requiring people to read the Background in conjunction with each Scenario.
To maintain trust in the BDD process, it’s important to keep your features fresh. Even when you drive the development from BDD scenarios, you’ll still learn lessons from the implementation that might need to be fed back into your Gherkin documentation.
In this case, we discovered that we could find a smaller slice of this story, and defer the business rule about proximity until our next iteration. Splitting stories like this is a powerful agile technique, and one that BDD can help you to master. Now we have a clean codebase and suite of scenarios that reflects the current state of the system’s development.
We’re ready to start the next iteration.
4.6.1. Lesson 6 - Questions (SpecFlow)
What does the Gherkin keyword Background do?
-
It provides a place to write a description of why the feature is valuable
-
It is treated exactly like a scenario, but is run as soon as SpecFlow starts
-
It is treated exactly like a scenario, but is run once before any other scenario in the feature file
-
The steps from the background are run as if they were inserted at the beginning of every scenario in the feature file - TRUE
Answer: The background is used to reduce duplication in scenarios by moving steps that are common to all scenarios into a single location. The steps in the background are run before every scenario in the feature file.
There can be a maximum of one Background per feature file. A Background only affects scenarios that are in the same feature file as the Background.
How might Backgrounds decrease the readability or maintainability of a feature file?
-
Backgrounds always improve readability
-
Readability can decrease because the reader must remember the contents of the background even when reading scenarios at the end of the feature file
-
Maintainability can decrease because the maintainer must be aware that there is a background even when adding scenarios to the end of the feature file
-
Maintainability can decrease because the maintainer must be aware of the background when moving a scenario to a different feature file
Answer: Backgrounds were created to aid readability, by reducing duplication in the scenarios. Unfortunately, moving important information out of a scenario means that anyone reading or modifying a feature file must be fully aware of the existence and content of a background. Since feature files typically contain several scenarios, that means holding two sections of the feature file in your mind at the same time, making the feature file harder to read and maintain.
5. Loops
5.1. Removing redundant scenarios
Welcome back to Cucumber School.
Feature: Hear shout
Shouty allows users to "hear" other users "shouts" as long as they are close enough to each other.
To do:
- only shout to people within a certain distance
Rule: Shouts can be heard by other users
Scenario: Listener hears a message
Given a person named Lucy
And a person named Sean
When Sean shouts "free bagels at Sean's"
Then Lucy should hear Sean's message
Scenario: Listener hears a different message
Given a person named Lucy
And a person named Sean
When Sean shouts "Free coffee!"
Then Lucy should hear Sean's message
Rule: Shouts should only be heard if listener is within range
Scenario: Listener is within range
Scenario: Listener is out of range
Last time we worked on cleaning up the Shouty features to keep them in sync with the current status of the project. We stripped the scenarios back to only specify the behaviour of passing messages between people. We made it clear that the proximity rule had not yet been implemented.
You’ll already remember from the Cucumber expressions chapter how important it is to be expressive in your scenarios, and keep them readable. In this chapter we’re going to learn some new tricks with Gherkin that will give you even more flexibility about how you write scenarios.
Once again the Shouty developers have been hard at work implementing that proximity rule. Let’s have a look at how they got on.
Right, so those two scenarios we left as placeholders, the one where the listener is within range and the one where the listener is out of range, are now passing. Fantastic! Let’s look at how they have been implemented.
Scenario: Listener is within range
Given the range is 100
And a person named Sean is located at 0
And a person named Lucy is located at 50
When Sean shouts "free bagels at Sean's"
Then Lucy should hear Sean's message
Scenario: Listener is out of range
Given the range is 100
And a person named Sean is located at 0
And a person named Larry is located at 150
When Sean shouts "free bagels at Sean's"
Then Larry should not hear Sean's message
Let’s review the changes to the feature file in more detail.
We now have four scenarios: our original two from the last time we looked at the code, and the two placeholders we wrote as reminders.
Feature: Hear shout
Shouty allows users to "hear" other users "shouts" as long as they are close enough to each other.
Rule: Shouts can be heard by other users
Scenario: Listener hears a message
Given the range is 100
And a person named Sean is located at 0
And a person named Lucy is located at 50
When Sean shouts "free bagels at Sean's"
Then Lucy should hear Sean's message
Scenario: Listener hears a different message
Given the range is 100
And a person named Sean is located at 0
And a person named Lucy is located at 50
When Sean shouts "Free coffee!"
Then Lucy should hear Sean's message
Rule: Shouts should only be heard if listener is within range
Scenario: Listener is within range
Given the range is 100
And a person named Sean is located at 0
And a person named Lucy is located at 50
When Sean shouts "free bagels at Sean's"
Then Lucy should hear Sean's message
Scenario: Listener is out of range
Given the range is 100
And a person named Sean is located at 0
And a person named Larry is located at 150
When Sean shouts "free bagels at Sean's"
Then Larry should not hear Sean's message
We used the second scenario - Listener hears a different message - to triangulate and force us to replace the hard-coded message output with a proper implementation. Now that we have a domain model that uses a variable for the message, there’s an insignificant chance of this behaviour regressing, so we can safely remove the second scenario.
Scenario: Listener hears a different message
Given the range is 100
And a person named Sean is located at 0
And a person named Lucy is located at 50
When Sean shouts "Free coffee!"
Then Lucy should hear Sean's message
Keeping excess scenarios is wasteful: they clutter up your feature files, distracting your readers. When you run your features as tests, excess scenarios make the test run take longer than necessary. The "Listener hears a message" scenario is a perfectly good way of checking that the message has been sent correctly.
5.1.1. Lesson 1 - Questions
Why was it a good idea to delete the scenario?
-
It doesn’t help illustrate the rule "Shouts can be heard by other users" — TRUE
-
No one should give away free coffee
-
There should only be one scenario per rule
Explanation: We created the scenario "Listener hears a different message" to force us to replace our hard-coded implementation. Now that we have a domain model that uses a variable for the message, there’s an insignificant chance of this behaviour regressing, so we can safely remove the second scenario.
Keeping excess scenarios is wasteful: they clutter up your feature files and slow down feedback.
5.2. Incidental details
The first scenario has changed since we last looked at it - it now specifies the range of a shout and the location of Sean and Lucy. This scenario exists to illustrate that a listener hears the message exactly as the shouter shouted it. All the additional details are incidental and make the scenario harder to read.
Scenario: Listener hears a message
Given the range is 100
And a person named Sean is located at 0
And a person named Lucy is located at 50
When Sean shouts "free bagels at Sean's"
Then Lucy should hear Sean's message
Let’s ensure that this scenario includes only essential information for the reader and remove all references to location and range.
Scenario: Listener hears a message
Given a person named Sean
And a person named Lucy
When Sean shouts "free bagels at Sean's"
Then Lucy should hear Sean's message
We’ll need to make changes to the step definitions to make sure that a Network class is always created - which we can do using an instance field.
private const int DEFAULT_RANGE = 100;
private string messageFromSean;
private Network network = new Network(DEFAULT_RANGE);
We’ve defaulted the range to 100.
If a scenario needs to document specific range, that can still be done by explicitly including a "Given the range is …" step.
private const int DEFAULT_RANGE = 100;
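The step definition behind "Given the range is …" isn’t shown in this lesson; one plausible sketch (an assumption about its shape, not the project’s actual code) would simply recreate the Network with the requested range:
// Hypothetical sketch: scenarios that care about the range override the default.
[Given("the range is {int}")]
public void GivenTheRangeIs(int range)
{
    network = new Network(range);
}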
We’ll also need to add a step definition that can create a person without the scenario needing to specify where they are located. The step definition gives each person created this way a default location of 0.
[Given("a person named {word}")]
public void GivenAPersonNamed(string name)
{
people.Add(name, new Person(network, 0));
}
Let’s run the scenarios to check we haven’t broken anything… and we’re good!
Looking at the two new scenarios - Listener is within range & Listener is out of range - we can see that they also contain incidental details. Since their purpose is to illustrate the "Shouts should only be heard if listener is within range" rule, there’s no need to actually document the content of the shout.
Scenario: Listener is within range
Given the range is 100
And a person named Sean is located at 0
And a person named Lucy is located at 50
When Sean shouts "free bagels at Sean's"
Then Lucy should hear Sean's message
Scenario: Listener is out of range
Given the range is 100
And a person named Sean is located at 0
And a person named Larry is located at 150
When Sean shouts "free bagels at Sean's"
Let’s remove the details that aren’t relevant to the range rule.
Scenario: Listener is within range
Given the range is 100
And a person named Sean is located at 0
And a person named Lucy is located at 50
When Sean shouts
Then Lucy should hear a shout
Scenario: Listener is out of range
Given the range is 100
And a person named Sean is located at 0
And a person named Larry is located at 150
When Sean shouts
Then Larry should not hear a shout
Next we add a step definition that allows Sean to shout, without needing us to specify the exact message.
[When("Sean shouts")]
public void WhenSeanShouts()
{
people["Sean"].Shout("Hello, world");
}
One that allows us to check that Lucy has heard exactly one shout - because she’s in range of the shouter.
[Then("Lucy should hear a shout")]
public void ThenLucyShouldHearAShout()
{
Assert.Equal(1, people["Lucy"].GetMessagesHeard().Count);
}
And one that allows us to check that Larry hasn’t heard any messages at all - because he’s out-of-range.
[Then("Larry should not hear a shout")]
public void ThenLarryShouldNotHearAShout()
{
Assert.Equal(0, people["Larry"].GetMessagesHeard().Count);
}
And finally run all tests - and we’re still green.
That’s better. We’ve removed inessential details, so that each scenario contains only the information needed to illustrate its business rule.
The scenarios would still run green if we removed the steps that set the range of a shout, because the range already has a default value. We’re not going to, though: since those scenarios illustrate the rule that deals with the range of a shout, the range is an essential part of the context for anyone reading them.
A happy side-effect is that, in order to set the range from our scenario, we’ve had to make it a configurable property of the system. So if our business stakeholders ever change their minds about the range, we won’t have to go hunting around in the code for where it’s been hard-coded.
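The solution code isn’t shown at this point, but making the range configurable presumably means Network now takes it as a constructor argument, along the lines of this sketch (field and parameter names are assumptions):
public class Network
{
    private readonly int range;

    // The range is injected rather than hard-coded, so changing it later
    // doesn't mean hunting through the code for a magic number.
    public Network(int range)
    {
        this.range = range;
    }

    // ... Subscribe and Broadcast omitted ...
}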
5.2.1. Lesson 2 - Questions
Why did we remove any reference to range or location from the first scenario "Listener hears a message"?
-
They are essential to system behaviour
-
They are incidental to the rule being illustrated — TRUE
-
They were only needed to triangulate the implementation
-
They made the scenario too long
Explanation: The first scenario exists to illustrate the rule "Listener hears a message." Since the behaviour is not affected by the range of a shout, neither the range nor the distance between the shouter and the listener is relevant. The information is therefore incidental and should be omitted from the scenario.
Why do we need to know the names of people using Shouty?
-
It’s important that every person in the system has a real name
-
It’s necessary to use persona, where the shouter is called Sean and the listeners are called Lucy or Larry
-
It doesn’t matter what we call them — but the automation code does need to be able to tell them apart — TRUE
-
The automation code has been written to recognise the names Sean, Lucy, and Larry
Explanation: It’s necessary to be able to distinguish the people that are involved in the scenario. We have called them Sean, Lucy, and Larry, but we could have called them Shouter, Listener1, and Listener2 (or even User1, User2, and User3).
We find that using persona (where the name gives an indication of the person’s purpose in the scenario) can be a useful way of conveying information, without cluttering up the scenario. If the names conveyed no information at all, they would not contribute to the readability of the scenario, and could be considered incidental.
Which pieces of information are incidental in this scenario?
Rule: Offer is only valid for Shouty users
Scenario: Customer is not a Shouty user
Given Nora is not a Shouty user
And Sean shouted "free bagels until midday"
When Nora orders a bagel and a coffee at 11:00am
Then she should be charged 75¢ for the bagel
-
Nora is not a Shouty user
-
Sean is offering "free bagels!"
-
Sean’s offer is only valid until midday — TRUE
-
Nora orders a bagel
-
Nora orders a coffee — TRUE
-
Nora places her order at 11:00am — TRUE
-
Nora gets charged for the bagel
-
Nora gets charged 75¢ for the bagel — TRUE
Explanation: This scenario is illustrating the rule that the "offer is only valid for Shouty users". It’s therefore essential to know that Nora is not a Shouty user, because this means that she is not eligible for the offer.
We don’t need to know that Nora orders a coffee, because that has no relevance to the rule. Nor do we need to know when the offer expires, when Nora places the order, or how much she will be charged — there will be other rules (and other scenarios) that illustrate that behaviour.
Although it’s incidental that the offer is for bagels, it is necessary to illustrate that Nora has ordered the item that is on offer to Shouty users — and that she will be charged for that item. We use "bagels" as an example to make the scenario easier to read, not because there’s something inherently special about bagels!
5.3. Refactoring to Data Tables
Let’s look at the two scenarios that illustrate the rule about range again. Notice how the steps that create Sean, Lucy, and Larry are very similar.
Rule: Shouts should only be heard if listener is within range
Scenario: Listener is within range
Given the range is 100
And a person named Sean is located at 0
And a person named Lucy is located at 50
When Sean shouts
Then Lucy should hear a shout
Scenario: Listener is out of range
Given the range is 100
And a person named Sean is located at 0
And a person named Larry is located at 150
When Sean shouts
Then Larry should not hear a shout
When we see steps like this, Gherkin’s Given When Then syntax starts to feel a bit clunky. Imagine if we could just write out a table, like this:
And people are located at
| name | location |
| Sean | 0 |
| Lucy | 50 |
Well, we’re in luck. We can!
Gherkin has a special syntax called Data Tables, that allows you to specify tabular data for a step, using pipe characters to mark the boundary between cells.
[Given("people are located at")]
public void GivenPeopleAreLocatedAt(Table table)
{
throw new PendingStepException();
}
As you can see, the step definition implicitly takes a single argument of type Table
, which is a representation of a Data Table in SpecFlow. As it represents a table of Person objects, we can rename it to personTable.
This object has a rich API for using the tabular data. The Rows
property of the Table class can be used to access the data rows of the table — all rows except the header row. A particular cell value can be retrieved from a row by using the header name. So Lucy’s location can be accessed by getting the "location" cell of the row at index 1.
[Given("people are located at")]
public void GivenPeopleAreLocatedAt(Table personTable)
{
throw new NotImplementedException("Lucy's location: " + personTable.Rows[1]["location"]);
}
Now we can easily iterate through the rows and turn them into instances of Person:
[Given("people are located at")]
public void GivenPeopleAreLocatedAt(Table personTable)
{
foreach (var row in personTable.Rows)
{
people.Add(row["name"], new Person(network, int.Parse(row["location"])));
}
}
With that done, we can update the other scenario …
Scenario: Listener is out of range
Given the range is 100
And people are located at
| name | location |
| Sean | 0 |
| Larry | 150 |
When Sean shouts
Then Larry should not hear a shout
Now we can check that everything is still green.
and delete our old step definition, which is now unused.
[Given("a person named {word} is located at {int}")]
public void GivenAPersonNamedIsLocatedAt(string name, int location)
{
people.Add(name, new Person(network, location));
}
SpecFlow strips all the white space surrounding each cell, so we can have a nice neat table in the Gherkin but still get clean values in the step definition underneath.
Notice we’ve still had to convert the location from a string to an integer, because SpecFlow can’t know that’s the type of value in our table.
people.Add(row["name"], new Person(network, int.Parse(row["location"])));
To improve the readability and maintainability of your step definition you can have SpecFlow automatically convert the table into a list of any class you want. If our Person object had a name
field we could automatically create instances of Person from this table. But things aren’t always that simple.
Instead, we’ll define a simple Whereabouts class to represent the data in the table.
public class Whereabouts
{
public string Name { get; set; }
public int Location { get; set; }
}
We’ve made it a nested type to the step definition class, as it doesn’t form part of our core domain.
Then we can create a conversion method similarly to the ones we created in Chapter 3 that converts a Table instance to an array of Whereabouts. We have to add the [StepArgumentTransformation]
attribute to it so that SpecFlow recognizes this conversion.
[StepArgumentTransformation]
public Whereabouts[] ConvertWhereabouts(Table table)
{
return table.Rows
.Select(row => new Whereabouts
{
Name = row["name"],
Location = int.Parse(row["location"])
})
.ToArray();
}
Now, if you declare your table parameter as a Whereabouts array , SpecFlow will automatically call our conversion method.
[Given("people are located at")]
public void GivenPeopleAreLocatedAt(Whereabouts[] whereaboutsList)
{
foreach (var whereabouts in whereaboutsList)
{
people.Add(whereabouts.Name, new Person(network, whereabouts.Location));
}
}
Let’s run the scenarios to check that we’re still green. And we are!
That looks much nicer - people positioned using a table in the feature file and really clean code that creates and positions people according to the data.
5.3.1. Lesson 3 - Questions (SpecFlow)
What is the name of the Gherkin syntax that allows you to specify pipe-separated, tabular data for a step?
-
Array
-
Data Matrix
-
Data Table — TRUE
-
Example Table
-
Table
Explanation:
The Gherkin syntax is called a Data Table. It represents a 2-dimensional array, with cell boundaries indicated by pipe characters |
What value would be retrieved from cell Rows[1]["C"] in the following table?
| A | B | C |
| 0 | 1 | 2 |
| 3 | 4 | 5 |
| 6 | 7 | 8 |
-
0
-
1
-
2
-
3
-
4
-
5 — TRUE
-
6
-
7
-
8
Explanation: SpecFlow treats the first row as the header, and the Rows property returns each subsequent row indexed, starting from 0. The cells within a row can be retrieved by indexing the row with the header name. We need two pairs of brackets, because the first selects the row and the second selects the cell within the row.
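In code, against the table above (assuming it arrives in a step definition as a Table parameter called table), the lookup would be:
// Rows excludes the header row, so Rows[1] is the row "| 3 | 4 | 5 |";
// indexing that row with the header name "C" returns the string "5".
var value = table.Rows[1]["C"]; // "5"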
Which of the following Data Tables will this method process successfully?
public void GivenTheOrderContainsTheFollowingItems(Table orderItemsTable)
{
    foreach (var row in orderItemsTable.Rows)
    {
        order.AddLine(row["Item Name"], int.Parse(row["Quantity"]));
    }
}
-
| Item Name | Quantity |
| Cheese & tomato | 1 |
— TRUE
-
| name | quantity |
| Cheese & tomato | 1 |
-
| Item Name | Quantity |
| Cheese & tomato | 1 |
| Pepperoni | 1 |
— TRUE
-
| Item Name | Quantity |
— TRUE
-
| Item Name | Quantity | Notes |
| Cheese & tomato | 1 | Extra cheese |
— TRUE
-
| Cheese & tomato | 1 |
| Pepperoni | 1 |
-
| Quantity | Item Name |
| 1 | Cheese & tomato |
— TRUE
Explanation: The Rows property of the Table class contains all non-header rows in the data table. To be able to retrieve a cell from the row, the header cell text must exactly match the hard-coded index strings used in the method. The order of the columns is not significant and any extra columns are ignored. A Data Table with only a header row would have an empty Rows collection and hence would not add any items to the order.
How would you need to change the step definition in order to be able to use the OrderItem list converted from a Data Table with the following conversion method
[StepArgumentTransformation]
public OrderItem[] ConvertOrderItems(Table orderItemsTable)
{
    return orderItemsTable.CreateSet<OrderItem>().ToArray();
}
-
[Given("the order contains the following items {OrderItems[]}")] public void GivenTheOrderContainsTheFollowingItems(OrderItem[] orderItems) { … }
-
[Given("the order contains the following items")] — TRUE public void GivenTheOrderContainsTheFollowingItems(OrderItem[] orderItems) { … }
-
[Given("the order contains the following items")] public void GivenTheOrderContainsTheFollowingItems(IEnumerable<OrderItem> orderItems) { … }
-
[Given("the order contains the following items")] public void GivenTheOrderContainsTheFollowingItems(Table orderItemsTable) { … }
Explanation: In order to access the Data Table attached to the step, you don’t have to add parameters to the Cucumber Expression; the parameters of the expression refer to the parts of the step text only. The parameter type of the method must be exactly the same as the return type of the conversion method, so even though an array implements the IEnumerable interface, SpecFlow will not match a parameter declared as IEnumerable<OrderItem>. If the parameter is defined as a Table, the conversion method will not be invoked.
5.4. Deeper into Data Tables
[Given("people are located at")]
public void GivenPeopleAreLocatedAt(Whereabouts[] whereaboutsList)
{
foreach (var whereabouts in whereaboutsList)
{
people.Add(whereabouts.Name, new Person(network, whereabouts.Location));
}
}
We separated the Data Table conversion from the step definition. The step definition is nice and clean now, but our conversion method is still fairly complex as it needs to handle the table headers and the cell data conversion.
[StepArgumentTransformation]
public Whereabouts[] ConvertWhereabouts(Table table)
{
return table.Rows
.Select(row => new Whereabouts
{
Name = row["name"],
Location = int.Parse(row["location"])
})
.ToArray();
}
But even though it is complex, we can notice a pattern in it. We take the "name" cell and update the Name
property of the Whereabouts object. We take the "location" cell and update the Location
property… This is not a big surprise for us, because we let the domain terms in the scenarios drive our domain model.
When this happens, this sort of consistency allows us to simplify the code further. SpecFlow defines an extension method on the Table class, called CreateSet
. CreateSet
can do exactly what we need here: create instances of a particular class and update its properties based on the table cells.
[StepArgumentTransformation]
public Whereabouts[] ConvertWhereabouts(Table table)
{
return table.CreateSet<Whereabouts>().ToArray();
}
The CreateSet
extension method is defined in the TechTalk.SpecFlow.Assist
namespace, so we have to add a using statement for that.
using TechTalk.SpecFlow.Assist;
With that change our conversion method is also clean and our tests still pass.
Data tables are very useful for setting up data in Given steps, but you can also use them for specifying outcomes.
One rule that we’ve been implying, but have never actually explored with an example, is that people can hear more than one shout. So far we’ve only specified a single message, so let’s try writing a scenario where Sean shouts more than once:
Rule: Listener should be able to hear multiple shouts
Scenario: Two shouts
Given a person named Sean
And a person named Lucy
When Sean shouts "Free bagels!"
And Sean shouts "Free toast!"
Then Lucy hears the following messages:
| message |
| Free bagels |
| Free toast |
See how natural it is to use a Data Table here?
So how do we implement this step definition? First, let’s paste the generated snippet into the StepDefinitions class and rename the parameter. What we want to do in this step definition is compare the messages that are actually heard with the messages we expected to hear. Getting the actual messages is easy: we just need to call the GetMessagesHeard
method. For the expected messages, we need to take the "message" cell from each table row like this. And finally we can make sure they are the same using an assert statement.
[Then("Lucy hears the following messages:")]
public void ThenLucyHearsTheFollowingMessages(Table expectedMessagesTable)
{
var actualMessages = people["Lucy"].GetMessagesHeard();
var expectedMessages = expectedMessagesTable.Rows.Select(r => r["message"]);
Assert.Equal(expectedMessages, actualMessages);
}
Oops! It looks like there is a problem. The two lists are different. By checking the error message we can see that there is a typo in our scenario. But let’s not fix this yet. Instead, let’s look at another typical pattern for a data table related assertion.
The GetMessagesHeard
method returned a list of strings, but in many cases the list that we want to compare the Data Table with is a list of objects. Let’s imagine that later we will extend the GetMessagesHeard
method to not only return the message but also the person who shouted it. To simulate that let’s add an alternative version of GetMessagesHeard
, called GetMessagesHeardEx
. The new method returns a list of HeardMessage
instances. The HeardMessage class has only one property, the message, but later we might extend it.
public class HeardMessage
{
public string Message { get; set; }
}
public IList<HeardMessage> GetMessagesHeardEx()
{
return messagesHeard
.Select(m => new HeardMessage {Message = m})
.ToArray();
}
Let’s first change the part of the step definition that calculates the actual messages by calling our extended method.
var actualMessages = people["Lucy"].GetMessagesHeardEx();
To compare this with the expected messages in the table, we need to check if the "message" cell matches the Message
property for each row. If there was a "shouter name" cell we would have to match that to a ShouterName
property and so on. This is the same consistency that we had when we used CreateSet. For such assertions, we can use another helper method, called CompareToSet
.
It works similarly to CreateSet: CompareToSet is an extension method on the Table class from the Assist namespace, and you pass it the list of objects that you would like to compare the table with.
[Then("Lucy hears the following messages:")]
public void ThenLucyHearsTheFollowingMessages(Table expectedMessagesTable)
{
var actualMessages = people["Lucy"].GetMessagesHeardEx();
expectedMessagesTable.CompareToSet(actualMessages);
}
Both CreateSet and CompareToSet can be further customized, but this is just enough for us. Let’s run the tests and look at the error message.
Oh! We’ve found the typo: we should have included exclamation marks on the expected messages. Well, at least this gives you a chance to see the nice diff output from CompareToSet when the table and the list are different. We see the expected values prefixed with a minus, and the actual values prefixed with a plus.
Let’s fix just one of these so you can see how the diff output changes.
| Free bagels! |
The matching bagels! line no longer has a minus, and for the mismatched row, the expected value still has a minus and the actual value has a plus.
Let’s fix this last typo , and we should be green again.
| Free toast! |
Great.
5.4.1. Lesson 4 - Questions (SpecFlow)
Which statements are true for the CreateSet method
-
CreateSet is an extension method that extends the Table class — TRUE
-
CreateSet updates the properties based on the column headers with a case-sensitive exact match
-
When it succeeds, CreateSet returns an enumerable containing exactly one item for each non-header row in the Data Table — TRUE
-
To be able to use the CreateSet extension method, the TechTalk.SpecFlow.Assist namespace has to be listed among the using statements of the file — TRUE
-
CreateSet can detect the type of the objects to be created based on the Data Table header
Explanation: CreateSet is an extension method for the Table class that is defined in the TechTalk.SpecFlow.Assist namespace. The target type has to be specified as a generic type parameter. The fields and properties of the target type are found using a case-insensitive match. Whitespace in the header name is also ignored, so cells with the header "shouter name" will update the property ShouterName. CreateSet returns an object for each non-header row in the Data Table. By default it creates the objects using their default constructor, but you can also specify a delegate to create the instances differently.
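To illustrate that header matching with a made-up example (not part of the Shouty code), a column headed "shouter name" would populate a ShouterName property:
public class HeardMessage
{
    public string Message { get; set; }
    public string ShouterName { get; set; }
}

// Given a Data Table like:
//   | message      | shouter name |
//   | Free bagels! | Sean         |
// CreateSet matches headers case-insensitively and ignores whitespace,
// so the "shouter name" column fills the ShouterName property.
// (Requires: using TechTalk.SpecFlow.Assist;)
var messages = table.CreateSet<HeardMessage>().ToArray();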
What extension method on the Table class compares the table with a list of objects and produces a textual output showing their differences?
-
Compare
-
AssertEqual
-
CompareToList
-
CompareToSet — TRUE
-
Covariance
Explanation: The method that compares two data tables is called CompareToSet. It is defined in the TechTalk.SpecFlow.Assist namespace.
5.5. DocString
When writing scenarios, occasionally we want to use a really long piece of data.
For example, let’s introduce a new rule about the maximum length of a message:
Rule: Maximum length of message is 180 characters
…and add a scenario to illustrate it, making the string just over the boundary of the rule:
Scenario: Message is too long
Given a person named Sean
And a person named Lucy
When Sean shouts "123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890x"
Then Lucy should not hear a shout
That’s pretty ugly, isn’t it!
Still, we’ll press on and get it to green, then we’ll show you how to clean it up.
Our existing step definition handles that ugly step with the long message just fine, but the last outcome step is undefined. We could either add a new step definition, or parametrize the existing "Larry should not hear a shout" step. Let’s modify the existing step definition:
[Then("{word} should not hear a shout")]
public void ThenPersonShouldNotHearAShout(string name)
{
Assert.Equal(0, people[name].GetMessagesHeard().Count);
}
OK, so we have a failing acceptance test. Let’s dive down into our solution and implement this new rule. It seems like the Network should be responsible for implementing this rule, so let’s go to its unit tests. As we explain in more detail in the next lesson, we start by adding a new unit test to specify this extra responsibility.
We’ll create a 181-character message like this and then assert that the message should not be heard when it’s broadcast.
[Fact]
public void Does_not_broadcast_a_message_over_180_characters_even_if_listener_is_in_range()
{
int seanLocation = 0;
var longMessage = new string('x', 181);
Person laura = new Person(network, 0);
network.Broadcast(longMessage, seanLocation);
Assert.DoesNotContain(longMessage, laura.GetMessagesHeard());
}
Let’s run that test. Good, it fails. Laura’s still getting the message at the moment. Now how are we going to implement this?
It looks like we’re already implementing the proximity rule here in the broadcast method. Let’s add another if statement here about the message length.
if (Math.Abs(listener.Location - shouterLocation) <= range)
if (message.Length <= 180)
listener.Hear(message);
Run the unit test again… and it’s passing. Great.
The code here has got a little bit messy and hard to read. One very basic move we could make to improve it would be to just extract a couple of temporary variables, one for the range rule and one for the length rule.
var withinRange = Math.Abs(listener.Location - shouterLocation) <= range;
var shortEnough = message.Length <= 180;
if (withinRange && shortEnough)
listener.Hear(message);
That’s better. This code could be improved even further of course, but at least we haven’t made it any worse.
Let’s just run the tests to check. Great - everything’s still green.
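If we wanted to push the cleanup a little further, one option would be to name the magic number and give each rule its own small helper method. The sketch below is just an idea of where the Network class could go next, assuming the range field and a listeners collection like the ones used in the course code; we won't actually apply it in this walkthrough.
private const int MaxMessageLength = 180;
public void Broadcast(string message, int shouterLocation)
{
    foreach (var listener in listeners)
    {
        // Each rule now reads as a named question.
        if (WithinRange(listener, shouterLocation) && ShortEnough(message))
            listener.Hear(message);
    }
}
private bool WithinRange(Person listener, int shouterLocation) =>
    Math.Abs(listener.Location - shouterLocation) <= range;
private bool ShortEnough(string message) =>
    message.Length <= MaxMessageLength;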
Now that we have everything passing again, we can tidy up the Gherkin to use a new piece of syntax we’ve been wanting to tell you about: a DocString.
DocStrings allow you to specify a text argument for a step that spans multiple lines. We could change our step to look like this instead:
Scenario: Message is too long
Given a person named Sean
And a person named Lucy
When Sean shouts the following message
"""
This is a really long message
so long in fact that I am not going to
be allowed to send it, at least if I keep
typing like this until the length is over
the limit of 180 characters.
"""
Then Lucy should not hear a shout
Now the scenario is much more readable.
We have to add a new step definition too. It doesn’t need a parameter in the Cucumber Expression — the DocString gets passed as a string argument to the step definition automatically.
Now we can fill out the rest of our new step definition.
[When("Sean shouts the following message")]
public void WhenSeanShoutsTheFollowingMessage(string message)
{
people["Sean"].Shout(message);
messageFromSean = message;
}
Let’s check that we’re still green — and we are!
These two step definitions do the same thing and they even have the same parameters. So we can keep just one of them and add the When attribute of the other to it.
[When("Sean shouts {string}")]
[When("Sean shouts the following message")]
public void WhenSeanShoutsAMessage(string message)
{
people["Sean"].Shout(message);
messageFromSean = message;
}
Having multiple Given, When or Then attributes on the same method is another way to handle alternates.
We don’t use DocStrings very often - having such a lot of data in a test can often make it quite brittle. But when you do need it, it’s useful to know about.
5.5.1. Lesson 5 - Questions (SpecFlow)
We start implementing the maximum message length rule by writing a scenario and seeing it fail. What did we do next?
-
Write another scenario to triangulate the new behaviour of the Network class
-
Implement the changed behaviour in the Network class
-
Add a new unit test to NetworkTest that specifies the change in behaviour of the Network class — TRUE
Explanation: We wrote a new unit test in NetworkTest. We’ll talk more about this in the next lesson.
Why would we use a DocString?
-
It’s the only way to include multi-line strings in a scenario
-
It’s a readable way to include long strings in a scenario — TRUE
-
DocStrings support multiple languages
-
SpecFlow provides a DocString type that provides useful string manipulation features
Explanation: The DocString is Gherkin syntax that allows long strings to be readably represented in a scenario.
All the magic happens when the DocString is read from the Gherkin. The content of the DocString gets passed to the step definition as a normal string — there’s no corresponding SpecFlow type.
Which of the following snippets of code are correct for the step below?
Then Simone says
"""
Now on that limb there was a branch
A rare branch and a rattlin' branch
And the branch on the limb
And the limb on the tree
And the tree in the bog
And the bog down in the valley-o
"""
-
[Then("Simone says")] public void SimoneSays() { }
-
[Then("Simone says")] public void SimoneSays(string lyrics) { } — TRUE
-
[Then("Simone says {string}")] public void SimoneSays(string lyrics) { }
-
[Then("Simone says {docstring}")] public void SimoneSays(string lyrics) { }
-
[Then("Simone says {docstring}")] public void SimoneSays(DocString lyrics) { }
Explanation: When using a DocString in a scenario, you do not add any parameter to the matching Cucumber Expression. You do need to provide a string parameter to the step definition to receive the contents of the DocString.
5.6. TDD Loops
You might have noticed that we’ve followed a pattern when we added behaviour to the system during this episode.
First we expressed the behaviour we wanted in a Gherkin scenario, wired up the step definitions, then ran SpecFlow to watch it fail.
Then, we found the first class in our domain model that needed to change in order to support that new behaviour. In this case, the Network class. We used a unit test to describe how we wanted instances of that class to behave. Then we ran the unit test and watched it fail.
We focused in and made changes to the class until its unit tests were passing. When the unit tests were passing, we then made some minor changes to clean up the code and make it more readable. This is the basic test-driven-development cycle: red, green, clean.
The technical name for this last clean-up step is refactoring. Refactoring is an ugly name for an extremely valuable activity: improving the design of existing code without changing its behaviour. You can think about it like cleaning up and washing the dishes after you’ve prepared a meal: basic housekeeping. But imagine the state of your kitchen if you never made time to do the dishes.
Go on, imagine it for a second.
Yuck!
Well, that’s how many, many codebases end up. The good thing about taking this course is that we’re teaching you how to write solid automated tests, and the good thing about having solid automated tests is that you can refactor with confidence, knowing that if you accidentally change the system’s behaviour, your tests will tell you.
Once we’re done refactoring, what do we do next? Run SpecFlow, of course! In this case, our scenario was passing with a single trip round the inner TDD loop, but sometimes you can spend several hours working through all the unit tests you need to get a single scenario to green.
Once the acceptance test is passing, we figure out the next most valuable scenario on our todo list, and start the whole thing all over again!
Together, these two loops make the BDD cycle. The outer loop, which starts with an acceptance test, keeps us focussed on what the business needs us to do next. The inner loop, where we continuously test, implement then refactor small units of code, is where we decide how we’ll implement that behaviour.
Both of these levels of feedback are important. It’s sometimes said that your acceptance tests ensure you’re building the right thing, and your unit tests ensure you’re building the thing right.
That’s all for this chapter. See you next time!
5.6.1. Lesson 6 - Questions
Which of the following is the best definition of the term "refactoring"?
-
Improving the efficiency of the code without changing its behaviour
-
Adding new functionality to the application
-
Changing the behaviour of the code
-
Tidying up the code without changing its behaviour — TRUE
-
Rearchitecting the code to get ready for adding new functionality
Explanation: The definition of refactoring is: improve the design (of some code) without changing its behaviour.
When can refactoring happen?
-
When a refactoring story gets prioritised by the Product Owner
-
Whenever all tests are green — TRUE
-
When at least one test is failing
-
First thing in the morning
-
Before committing code to source control
-
At the end of an iteration
Explanation: Refactoring is part of the day-to-day work of every software developer. It’s when they tidy up the code once they’ve got it working.
Since part of the definition of refactoring is that it shouldn’t change the behaviour of the code, they will run the tests to make sure nothing was broken. Which means that the tests MUST be passing BEFORE they start refactoring. Otherwise, how can they be sure that the behaviour hasn’t changed?
How are the acceptance test and unit test related?
-
Acceptance tests ensure we "build the right thing"; unit tests ensure we "build the thing right" — TRUE
-
An acceptance test should be passing before we start writing unit tests
-
We may write many unit tests before the currently failing acceptance test passes — TRUE
-
All unit tests should be passing before we write an acceptance test
-
BDD consists of two loops: an outer acceptance test loop and an inner unit test loop — TRUE
Explanation: Once we have a scenario, we automate it — and we expect it to fail, because we haven’t added the functionality it specifies to the system yet. This is the beginning of the outer, acceptance test loop, that ensures we’re building what the Product Owner wants: "build the right thing."
We then enter the inner, unit test loop. It’s unit tests that define the precise behaviour of small units of code — and ensure that we "build the thing right." They give us the safety to improve the code’s design (refactor), because they will fail if we accidentally change the code’s behaviour while refactoring. We may have to go round the inner loop a number of times, adding several unit tests, before we’ve added enough functionality to make the outer acceptance test pass.
And then we write the next failing acceptance test…
6. Working with Cucumber
6.1. Basic Filtering
Hello, and welcome back to Cucumber School.
Last time we learned about two very different kinds of loops. First, we used DataTables to loop over data in your scenarios.
Then we learned about BDD cycles. We saw how the outer loop of BDD helps you to build the right thing while the inner loop helps you build the thing right.
In this lesson, we’re going to teach you all about how to run different SpecFlow scenarios.
When we start working on a new scenario we often take a dive down to the inner TDD loop where we use a unit testing tool to drive out new classes or modify the behaviour of existing ones. When our unit tests are green and the new code is implemented we return to the SpecFlow scenarios to verify whether we have made overall progress or not.
If we have lots of SpecFlow scenarios, it can be distracting to run all of them each time we do this. We often want to focus on a single scenario - or perhaps just a couple - to get feedback on what we’re currently working on.
There are several ways to do this. SpecFlow converts the scenarios to tests that can be executed by the test execution framework we configured for the project, which is xUnit in our case. Because of that the SpecFlow scenarios appear as regular coded tests in the test runner tools you use, for example in the Visual Studio Test Explorer window. These tools usually provide several filtering options.
Probably the easiest way to filter is to run only the scenario with a specified name.
Simply typing the name of the scenario into the Test Explorer window search box filters the list to that particular scenario. You can run or debug the selected scenario from the context menu, but the easiest option is to hit Run All, which runs all scenarios in the filtered view. It is worth spending a few minutes learning its keyboard shortcut, Ctrl R,V by default, as it can speed up your loops drastically.
You can filter for the scenario name, as you have seen, but you can use the search box to filter for keywords as well. Let’s use it to run all scenarios with the text "range" in their name.
The Test Explorer window can also be used to filter for the outcome, for example if you want to re-run all failing tests, or you can also run tests based on their hierarchy. Since the feature files appear as a separate node in the hierarchy, you can use this to run all scenarios from a particular feature file.
The search box can contain complex search expressions. You can discover these options by clicking on the Add search filter links or by checking out the documentation. We’ll use this to show you how to filter using tags.
Let’s say we want to work on this scenario for the next couple of hours. First, we’ll put a focus tag right here, above this scenario. Tags start with an at-sign and are case sensitive.
As we said, SpecFlow converts the scenarios to tests. This conversion happens at compile time, so I need to build my solution for the changes to be applied.
SpecFlow converts the tags to test categories, or traits as the Test Explorer window calls them. So to be able to filter for a particular tag, we have to enter a trait expression.
Trait expressions start with the word Trait followed by a colon and the name of the tag without the at sign.
Trait:focus
Now we can run only the scenarios tagged with focus…
It’s entirely up to you what you name your tags. When we’re working on a particular area of the application it is common to use a temporary tag like this - we’ll remove it before we check our code into source control.
Tags can be used for other purposes as well. If you have lots of scenarios it can be time-consuming to run them all every time. For example, you can tag a few of them with @smoke and run only those before you check in code to source control. Running just the smoke tests will give you a certain level of confidence that nothing is broken without having to run them all.
Trait:smoke
We filtered the Test Explorer window but, as you can see, no scenarios are shown. This is because the list hasn’t refreshed yet. In fact we haven’t even saved the file yet! We could save the file, build the project and run the tests, but the Run All command does all of these steps.
Here it is! Now you probably understand better why learning the keyboard shortcut for this command helps a lot. Ctrl R,V!
Running the smoke tests gives you quick feedback. If you’re running SpecFlow on a Continuous Integration Server as well, you could run all the scenarios there, detecting any regressions you might have missed by only running the smoke tests.
Tags give you a way to organize your scenarios that cut across feature files. You can think of them like sticky labels you might put into a book to mark interesting pages that you want to refer back to.
Some teams also use tags to reference external documents, for example, tickets in an issue tracker or planning tool. Let’s pretend we are using an issue tracker while working on Shouty and all the behaviour we built so far is related to issue number 15. We could tag the whole feature file with this single line at the top. All the scenarios within that file now inherit that tag, so if we filter for this tag, Visual Studio will run all the scenarios in the feature file.
You can use more complex tag expressions to select the scenarios you want to run. For example, you could use a trait expression to exclude all the scenarios tagged as @slow. Let’s mark a few of them as slow… and this time I’ll build the project to show the filtering results. Then we rewrite the trait expression in the search box to filter for the slow scenarios. Now if I add a dash ("-") to the front, you can see all the scenarios that are not slow. Now when we run the tests, the "@slow" scenarios won’t be run.
-Trait:slow
You can read about how to build more complicated filter expressions in the Visual Studio documentation.
There’s one more thing to learn about tags. They can be combined with hooks, so that you can be selective about which hooks to run when. We’ll cover that in a future chapter.
6.1.1. Lesson 1 - Questions
Which of the filter expressions below would cause the scenario "Two" to be included in a Visual Studio Test Explorer run based on this feature file (steps omitted): MULTIPLE_CHOICE
@MVP
Feature: My feature
Rule: rule A
Scenario: One
@smoke @slow
@regression-pack
Scenario: Two
@regression-pack @pricing
Scenario: Three
-
Two ----TRUE
-
Scenario: Two ----FALSE
-
Trait:regression-pack ----TRUE
-
Trait:MVP ----TRUE
-
-Trait:pricing ----TRUE
-
Trait:@smoke ----FALSE
Explanation: Tags are inherited from the enclosing scope, so a Scenario inherits tags from the Feature. At present Rules cannot be tagged, although we expect this to be fixed in the near future, at which point tags will be inherited like this: Feature→Rule→Scenario.
Tags can be on the same line and on consecutive lines.
The filter expressions can contain the scenario name (without the 'Scenario' keyword) or a trait expression (Trait:smoke) that can filter for tags (without the '@' character). For excluding scenarios tagged with a specific tag, the trait expression has to be prefixed with a '-' (-Trait:pricing).
Why is it useful to learn the keyboard shortcut of the "Run All" command of Visual Studio? MULTIPLE_CHOICE
-
Because the command can be used to quickly re-run the scenarios we want to focus on, as configured by the filter expression. ----TRUE
-
Because it runs all tests regardless of the configured filter expression. ----FALSE
-
Because the Run All command saves the modified files, builds the project and runs the tests in one step. ----TRUE
-
Because it increases your chance to win the 'Who knows more Visual Studio shortcuts' competition. ----FALSE
Explanation: Preparing a development environment where you can focus on the current task increases productivity. Adding a @focus tag to the scenarios you’re working on allows you to get fast feedback by re-running just those scenarios. The default shortcut of Visual Studio’s Test Explorer is Ctrl R,V.
These commands ensure that the changes are saved and the necessary projects compiled.
6.2. Running the scenarios from the command line
In the previous lesson, we ran the SpecFlow scenarios in Visual Studio. That gives you the quick feedback that you need during development.
But we would also like to regularly ensure that the changes we made didn’t cause any regressions. For that we can kick off the full test execution on a Continuous Integration Server, or from a console on the local machine so we can keep working on other things while it runs.
Whether you need to configure your Continuous Integration build or just want to run the tests from a console locally, the .NET command line tools are what you use for this. Fortunately it is pretty easy.
Let’s open a CMD console or a PowerShell window and change the directory to the folder of your project.
Now use the dotnet test
command.
C:\...\Shouty\Shouty.Specs>dotnet test
The dotnet test command is part of the cross-platform .NET command line tools, so you can do the same on Linux or on macOS as well. And also inside a Docker container, of course.
Starting test execution, please wait...
A total of 1 test files matched the specified pattern.
Passed! - Failed: 0, Passed: 5, Skipped: 0, Total: 5, Duration: 99 ms - Shouty.Specs.dll (netcoreapp3.1)
The output is not very verbose but it clearly shows that all of our tests were passing, so our changes did not cause any unwanted side-effects. Good to know.
We wanted to perform a full regression test by running the tests from the command line, therefore we did not apply any filters. But sometimes we need to. Maybe we just want to run the smoke tests.
The dotnet test command provides a --filter
option where you can specify a filter expression. In the previous lesson we discussed that SpecFlow converts tags to test categories or traits. In Visual Studio we used a trait expression, but unfortunately the same expression won’t work here. The exact expression syntax depends on the test execution framework you use. Since we’re using xUnit, we have to filter using the Category=smoke
expression. In MsTest and NUnit we would have to say TestCategory
instead of Category
, but the expression is otherwise the same. Let’s fix this for xUnit and run the tests.
C:\...\Shouty\Shouty.Specs>dotnet test --filter Category=smoke
Starting test execution, please wait...
A total of 1 test files matched the specified pattern.
Passed! - Failed: 0, Passed: 2, Skipped: 0, Total: 2, Duration: 77 ms - Shouty.Specs.dll (netcoreapp3.1)
As you can see, only two tests ran, so the filtering worked.
You can also use the filter expression to exclude tagged scenarios from the execution. For that you have to use the not equal operator. Let’s exclude the slow tests now.
C:\...\Shouty\Shouty.Specs>dotnet test --filter Category!=slow
Again, with MsTest or NUnit you would need to use TestCategory here. You can compose even more complex filter expressions for dotnet test. It is worth checking out the documentation for details.
Starting test execution, please wait...
A total of 1 test files matched the specified pattern.
Passed! - Failed: 0, Passed: 3, Skipped: 0, Total: 3, Duration: 99 ms - Shouty.Specs.dll (netcoreapp3.1)
Teams often use such filters on the Continuous Integration Server if they write - or we can say formulate - the scenarios some time before the actual implementation work starts. Having these scenarios in source control would cause the build to fail although there is nothing wrong with the implementation. We just have tests that are not supposed to pass yet. If the team agree to tag these scenarios, for example with @formulated
then they can easily exclude them on the build server with this expression.
C:\...\Shouty\Shouty.Specs>dotnet test --filter Category!=formulated
Starting test execution, please wait...
A total of 1 test files matched the specified pattern.
Passed! - Failed: 0, Passed: 5, Skipped: 0, Total: 5, Duration: 99 ms - Shouty.Specs.dll (netcoreapp3.1)
Seeing all these scenarios pass means that we are progressing well. Just don’t forget to remove the @formulated tag once you are done!
In this lesson you’ve learnt how to run scenarios from the command line and how to filter the set of scenarios to run using tags. On the Continuous Integration Server we probably also want to preserve and publish the test results. We’ll cover that in lesson 4.
6.2.1. Lesson 2 - Questions
Which of the commands below would run all scenarios NOT tagged with @slow when executed from a SpecFlow project folder that has been configured to use MsTest?
-
dotnet test --filter Category!=slow
-
dotnet test --filter TestCategory!=slow ----TRUE
-
dotnet test --filter TestCategory!=@slow
-
dotnet test --filter -Trait:slow
Explanation:
SpecFlow converts Gherkin tags into test categories for the configured test execution framework. When using dotnet test
, the filter expression syntax you should use depends on the test execution framework. With MsTest and NUnit, the TestCategory
keyword has to be used, while xUnit uses 'Category'.
For the test category expression the =
and the !=
operators can be used. The tag name has to be supplied without the @
character.
The trait expressions used in Visual Studio Test Explorer cannot be used for dotnet test
.
6.3. More Control
SpecFlow is first and foremost a tool that facilitates a common understanding between people on a project. For projects implemented specifically for a non-English speaking local market, translating everything to English might be a barrier to expressing the domain requirements. In this case it is better to describe the requirements in the stakeholders' own spoken language.
SpecFlow supports over 70 different languages, thanks to contributions from people from all over the world.
To see the translation of the Gherkin keywords for a particular language, check out the Gherkin syntax documentation. For example, this is how the keywords translate to Hungarian.
Let’s see how someone would create a feature file written in a different spoken language. I will do this for the Hungarian language, but you can choose another language you speak.
For that we can add a new feature file first… and make it empty.
The first line tells SpecFlow which language the feature file is written in.
#language: hu-HU
Now we can start writing our feature file. Remember, it should start with the feature keyword. The auto-complete feature of the Visual Studio integration is there to help: it shows which keyword I should use.
#language: hu-HU
Jellemző: Kiáltás meghallása
We can keep editing the file by adding a scenario. In the selected language, of course…
#language: hu-HU
Jellemző: Kiáltás meghallása
Forgatókönyv: A hallható meghall egy üzenetet
Adott egy ember Sean
És egy ember Lucy
Ha Sean azt kiáltja, hogy "Ingyen bagel a Sean's-nál"
Akkor Lucynak meg kell hallania Sean üzenetét
This is Cucumber School and not a Hungarian lesson, but you might have guessed that my scenario tells the same story that we described in the "Listener hears a message" scenario earlier. But if we look at this new scenario we can see that Visual Studio reports the steps as undefined. This is because we have used the English step texts in our Cucumber Expressions. In order to make this pass, we would need to add step definitions with Hungarian texts in the expressions or add an additional Given, When or Then attribute to our step definition methods with the Hungarian text, like this.
[Given("egy ember {word}")]
[Given("a person named {word}")]
public void GivenAPersonNamed(string name)
{
people.Add(name, new Person(network, 0));
}
If we build the project, the first two steps appear to be defined now.
We can fix the other two step definitions as well… and we have our first Hungarian shouty scenario pass!
[When("Sean azt kiáltja, hogy {string}")]
[When("Sean shouts {string}")]
[When("Sean shouts the following message")]
public void WhenSeanShoutsAMessage(string message)
{
people["Sean"].Shout(message);
messageFromSean = message;
}
[Then("Lucynak meg kell hallania Sean üzenetét")]
[Then("Lucy should hear Sean's message")]
public void ThenLucyShouldHearSeansMessage()
{
Assert.Contains(messageFromSean, people["Lucy"].GetMessagesHeard());
}
Usually the feature file languages are not mixed within a project. If you don’t want to add the same language header line to all of the feature files, you can also change the default feature file language in SpecFlow.
For that we need to add a SpecFlow configuration file to our project. The SpecFlow configuration file is a simple JSON file, named specflow.json
. This file needs to be copied to the output directory.
As there is no auto-complete support for this file, it’s easiest to check out the SpecFlow documentation. The first example here does exactly what we want, so we can just copy and paste it into our project, and change the language to the one we have chosen.
{
"language": {
"feature": "hu-HU"
}
}
When we build the project, we get lots of errors from our English feature file. This is because we changed the default language and that file does not have a language setting at the top. But the new file now works without the language setting…
In Chapter 3 we talked about step definition parameters and that SpecFlow helps you to convert them to a .NET data type. SpecFlow performs these conversions using the feature file language. So for example if you include a fractional number that needs to be converted to double, SpecFlow will expect the decimal separator character used in the language of your feature file. In the Hungarian language for example, a comma is used as the decimal separator instead of a dot. This is also important for date conversions.
This default behavior can be changed by explicitly declaring what culture setting SpecFlow should use for conversions. For example we can force the conversions to be made using US English culture even in Hungarian feature files.
{
"language": {
"feature": "hu-HU"
},
"bindingCulture": {
"name": "en-US"
}
}
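To make the effect of the binding culture concrete, here is a hedged illustration. The step and binding below are hypothetical, not part of Shouty: with the Hungarian feature-file culture a value written as 2,5 in a step is converted to the double 2.5, whereas with bindingCulture set to en-US the feature file would have to say 2.5 instead.
using TechTalk.SpecFlow;
[Binding]
public class MaximumRangeSteps
{
    private double maximumRangeInMeters;
    // SpecFlow converts the captured text to double using the binding culture
    // (by default, the culture of the feature file language).
    [Given(@"the maximum shout range is (.*) meters")]
    public void GivenTheMaximumShoutRangeIsMeters(double range)
    {
        maximumRangeInMeters = range;
    }
}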
There are a few other things you can configure that you can find in the documentation. It’s worth highlighting the one that can instruct SpecFlow to load step definitions from other external projects as well. This is useful in bigger applications with multiple SpecFlow projects.
For the rest of this chapter, let’s remove the non-English feature file and reset the configuration by removing the specflow.json
file.
That’s quite a lot to digest, but to make SpecFlow really useful to your team, it’s good to spend some time learning the details of how to configure it. In this lesson, we showcased the SpecFlow configuration options and you learned how to write your scenarios in different spoken languages.
6.3.1. Lesson 3 - Questions
Which of the following first lines changes the language of a feature file?
-
# language: hu-HU ----TRUE
-
! language: hu-HU
-
language: hu-HU
-
# i18n: hu-HU
Explanation:
Gherkin supports lots of languages on a per-feature-file basis. The language setting has to be the first line in the feature file, and it has to be a comment with the content language: <language_identifier>
How can you configure the behavior of SpecFlow?
-
By adding a file 'specflow.json' to the project root and enabling "Copy to output folder" ----TRUE
-
By adding a file 'config.json' to the project in any folder
-
By adding a file 'specflow.yaml' to the project root and enabling "Copy to output folder"
-
By adding a file 'specflow.json' to the project in any folder
Explanation: SpecFlow configuration settings can be provided by adding a JSON file named 'specflow.json' to the project root and enabling "Copy to output folder" for the file. A reference guide of the possible settings can be found in the documentation. For backwards compatibility you can also specify the settings in an XML configuration file or in the App.config file for older projects.
The test execution framework (MsTest, NUnit, etc.) and other plugins can be enabled by adding the appropriate NuGet package (e.g. SpecFlow.MsTest) to the project references.
6.4. Dealing with execution results
In the previous lessons we’ve learnt how to configure SpecFlow and how to run scenarios from Visual Studio and from the command line. The test output we have seen there contains a brief summary of the execution and the failures. If the tests were executed on a Continuous Integration Server, we probably also want to preserve and publish the test results. This is what we will look at in this lesson.
The dotnet test
tool outputs the execution results through configurable loggers. The logger can be specified using the --logger
option. There are multiple logger providers you can choose from. You can even write your own. But most teams use the TRX logger, which saves the execution results into a Visual Studio Test Results (TRX) file. Let’s use this.
C:\...\Shouty\Shouty.Specs>dotnet test --logger trx
Starting test execution, please wait...
A total of 1 test files matched the specified pattern.
Results File: W:\CucumberSchool\bdd-with-cucumber-code-dotnet\Shouty\Shouty.Specs\TestResults\gaspar_SPECSOL-D01_2021-01-22_15_46_29.trx
Passed! - Failed: 0, Passed: 5, Skipped: 0, Total: 5, Duration: 137 ms - Shouty.Specs.dll (netcoreapp3.1)
The execution finished and reported that the results have been saved to a file named with the current timestamp, inside the TestResults folder of the project. Let’s look at this file.
Visual Studio is associated with this file extension, so if I open this file from File Explorer, it opens in Visual Studio. We can see all the individual test executions, and even the details of each of them, where we see the step execution results.
TRX files are XML files, so it is easy to write tools that process the results and convert them to another format. It is, for example, a common expectation to present the execution results as an HTML file that can be checked by all stakeholders, even those who don’t have Visual Studio.
There are many small open source tools that can do such a conversion. For example the fork of the Trxer project by @stevencohn on GitHub is a nice one that provides some dedicated styling for SpecFlow results.
This one is a .NET assembly that I downloaded earlier, so now I can invoke it with the dotnet command we have seen already.
C:\...\Shouty\Shouty.Specs>dotnet <TrxReporter-path>\TrxReporter\TrxReporter.dll --input TestResults\gaspar_SPECSOL-D01_2021-01-22_15_46_29.trx
The result is a single HTML file that I could publish to my team workspace or just open locally. It shows the execution summary as well as the individual scenario results grouped by features.
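If you ever need a quick, custom summary instead of one of these ready-made tools, a few lines of C# are enough, because the TRX file is plain XML. The sketch below is only an illustration; the element and namespace names assume the standard Visual Studio TRX schema, so check them against your own file.
using System;
using System.Linq;
using System.Xml.Linq;
class TrxSummary
{
    static void Main(string[] args)
    {
        // Usage: TrxSummary <path-to-trx-file>
        XNamespace ns = "http://microsoft.com/schemas/VisualStudio/TeamTest/2010";
        var doc = XDocument.Load(args[0]);
        // Group the individual test results by their outcome (Passed, Failed, ...).
        var outcomes = doc
            .Descendants(ns + "UnitTestResult")
            .GroupBy(r => (string)r.Attribute("outcome"))
            .ToList();
        foreach (var group in outcomes)
            Console.WriteLine($"{group.Key}: {group.Count()}");
        Console.WriteLine($"Total: {outcomes.Sum(g => g.Count())}");
    }
}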
The TRX file format is also understood by most Continuous Integration platforms, so they can show the details in the build results or even highlight whether a test is failing for the first time.
Processing the TRX file on the build server is not that easy with non-deterministic file names. In this case it is more useful to specify the output file name for the TRX logger. This can be done by specifying the logfilename
parameter of the TRX logger like this.
C:\...\Shouty\Shouty.Specs>dotnet test --logger trx;logfilename=shouty-result.trx
The SpecFlow+ LivingDoc Generator from the Tricentis SpecFlow team uses a slightly different approach. This free tool does not process the TRX file, but produces its own result file, and from that it can generate comprehensive documentation of your requirements based on the feature files and the scenarios you defined. Although the generated documentation also contains the test results, its primary focus is to produce the so-called Living Documentation — documentation that shows whether the application currently fulfills the requirements.
In order to setup SpecFlow+ LivingDoc Generator, we need to install the SpecFlow.Plus.LivingDocPlugin
NuGet package.
C:\...\Shouty\Shouty.Specs>dotnet add package SpecFlow.Plus.LivingDocPlugin
Now let’s run the tests. We don’t need the TRX file now, so running dotnet test
is enough. The SpecFlow+ LivingDoc plugin generates a result file in the output folder, named TestExecution.json
by default.
C:\...\Shouty\Shouty.Specs>dir bin\Debug\netcoreapp3.1
Here it is!
Now we need to run the actual generator. The generator has to be installed as a .NET tool for the project.
Since this is the first .NET tool that we have installed for this project, we need to first initialize the .NET tool configuration. For that we need to go up to the solution folder in our console window and invoke dotnet new tool-manifest
. This creates a file named dotnet-tools.json
in the .config
folder of the solution. You should add this file to your source control.
C:\...\Shouty\Shouty.Specs>cd ..
C:\...\Shouty>dotnet new tool-manifest
Getting ready...
The template "Dotnet local tool manifest file" was created successfully.
Now we can install the SpecFlow+ LivingDoc Generator. This can be done using the dotnet tool install
command that pulls the generator from NuGet.org and installs it for this project.
C:\...\Shouty>dotnet tool install SpecFlow.Plus.LivingDoc.CLI
You can invoke the tool from this directory using the following commands: 'dotnet tool run livingdoc' or 'dotnet livingdoc'.
Tool 'specflow.plus.livingdoc.cli' (version '3.5.286') was successfully installed. Entry is added to the manifest file W:\CucumberSchool\bdd-with-cucumber-code-dotnet\Shouty\.config\dotnet-tools.json.
As the message says, the tool has been installed successfully and can be used with the dotnet livingdoc
command.
First, let’s step back to our SpecFlow project folder and start with dotnet livingdoc
. Now we need to specify a couple of settings for the generator. If you get lost, you can always call dotnet livingdoc help or check the documentation.
We need to say test-assembly
, because we want to generate the result from an assembly, and we have to specify the compiled assembly of our Shouty project.
Finally we need to provide the path of the generated TestExecution.json file using the --test-execution-json
option. And run.
C:\...\Shouty\Shouty.Specs>dotnet livingdoc test-assembly bin\Debug\netcoreapp3.1\Shouty.Specs.dll --test-execution-json bin\Debug\netcoreapp3.1\TestExecution.json
W:\CucumberSchool\bdd-with-cucumber-code-dotnet\Shouty\Shouty.Specs\LivingDoc.html was successfully generated.
We’ve got the generated HTML file. As you can see, the result focuses on the feature structure, but the details are also displayed.
Since the result is a single HTML file, it is easy to share it with all stakeholders. If you use Azure DevOps to track your work items, you can use the integrated version of SpecFlow+ LivingDoc as well.
There are more and more tools that can help you to implement living documentation for SpecFlow, like Pickles Doc, Augurk or SpecSync for Azure DevOps. The SmartBear Cucumber team is working on a platform called Cucumber Reports that can be used to publish and share living documentation. At the time of the recording this is not yet available for SpecFlow, but hopefully it will be soon.
In this lesson we have enumerated a couple of options to share your results and your living documentation with the entire team. This is essential in order to facilitate a common understanding between people on the project — which is our ultimate goal.
With that we finished this chapter. See you next time!
6.4.1. Lesson 4 - Questions
Which of the commands below would run all scenarios and save the execution results to a file 'result.trx'?
-
dotnet test --logger trx
-
dotnet test --out result.trx
-
dotnet test --logger trx;logfilename=result.trx ----TRUE
-
TRX file format is not supported by
dotnet test
Explanation:
The test output of the dotnet test
command can be configured by specifying a logger. The trx logger can be used for saving the results to a TRX file. By default it saves the result to a file name containing the current timestamp. To use a deterministic filename, the logfilename
parameter has to be provided.
Why is it valuable to generate reports about the SpecFlow test execution results? MULTIPLE-CHOICE
-
The readable report can help the business stakeholders follow the progress of the implementation and therefore increases trust. ----TRUE
-
Without HTML reports, the test execution results cannot be checked.
-
It is easier to assess the impact of a test failure when it is connected to the business expectations. ----TRUE
-
It is easier to quickly react to failures coming from the Continuous Integration Server (CI) when you don’t need special tools, like Visual Studio to view the results. ----TRUE
-
The reports are not directly valuable, they are just required to fulfill the policies of the development process.
7. Details
In the last lesson we took a break from the code to sharpen up your skills with Cucumber’s command-line interface.
Now it’s time to dive right back into the code. We’re going to explore one of the hottest topics that teams come across when they start to get to grips with Cucumber and BDD: how much detail to use in your scenarios.
Many teams find they can’t easily agree on this. It can often seem like a matter of personal preference. It’s true there are no right and wrong answers, but we’re going to teach you some heuristics you can apply to help you make better decisions.
8. Example Mapping
In the last lesson we saw how easily incidental details can creep into your scenarios, talked about why they’re a problem, and showed you some techniques for massaging them back out again. But, as we pushed the details out of our scenarios, we made the step definition code more complicated. We promised to show you how to deal with that extra complexity, and we’re going to get to that in the next chapter, Chapter 9.
First though, we want to look at how we could have prevented the Premium Accounts feature from getting into such a mess in the first place.
We’re going to learn about a practice called Example Mapping, a way to structure the conversation between the Three Amigos - Tester, Developer and Product Owner - to develop shared understanding before you write any code.
8.1. Example Mapping: Why?
In terms of the three practices we introduced in Chapter 1 - Discovery, Formulation and Automation - what went wrong with the Premium Accounts feature?
Thinking about it, we can see that the development team jumped straight into Automation - writing the implementation of the feature. They did the bare minimum of Formulation - just enough to automate a test for the feature, but really we did a lot of the Formulation later on as we cleaned it up. Finally, much of the Discovery happened at the end once Tamsin had a chance to test the feature by hand.
So in essence, they did everything backwards.
In software projects, it’s often the unknown unknowns that can make the biggest difference between success and failure. In BDD, we always try to assume we’re ignorant of some important information and try to deliberately discover those unknown unknowns as early as possible, so they don’t surprise us later on.
A team that invests just a little bit extra in Discovery, before they write any code, saves themselves a huge amount of wasted time further down the line.
In lesson 1, we showed you an example of the Three Amigos - Tester, Developer and Product owner - having a conversation about a new user story.
Nobody likes long meetings, so we’ve developed a simple framework for this conversation that keeps it quick and effective. We call this Example Mapping.
An Example Mapping session takes a single User Story as input, and aims to produce four outputs:
-
Business Rules that must be satisfied for the story to be implemented correctly
-
Examples of those business rules playing out in real-world scenarios
-
Questions or Assumptions that the group of people in the conversation need to deal with soon, but cannot resolve during the immediate conversation
-
New User Stories sliced out from the one being discussed in order to simplify it.
We capture these, as we talk, using index cards or a virtual equivalent.
Working with these simple artefacts, rather than trying to go straight to formulating Gherkin, allows us to keep the conversation at the right level - seeing the whole picture of the user story without getting lost in details.
8.1.1. Lesson 1 - Questions
Why did we say the development team’s initial attempt at the premium accounts feature was "done backwards"?
-
They did Discovery before Automation
-
They did Discovery before Formulation
-
They started with Automation, without doing enough Discovery or Formulation first (Correct)
-
They started with Discovery, then did Formulation and finally Automation
Explanation:
The intended order is Discovery, Formulation then Automation. Each of these steps teaches us a little more about the problem.
Our observation was that the team jumped straight into coding (Automation), retro-fitting a scenario later. The discovery only happened when Tamsin tested the feature.
What does "Deliberate Discovery" mean (Multiple choice)
-
One person is responsible for gathering the requirements
-
Discovery is something you can only do in collaboration with others
-
Having the humility to assume there are things you don’t yet understand about the problem you’re working on (Correct)
-
Embracing your ignorance about what you’re building (Correct)
-
There are no unknown unknowns on your project
Explanation:
Deliberate Discovery means we assume that there are important things we don’t yet know about the project we’re working on, and so make a deliberate effort to look for them at every opportunity.
Although we very much encourage doing that collaboratively, it’s not the main emphasis here.
Read Daniel Terhorst-North’s original blog post.
Why is it a good idea to try and slice a user story?
-
Working in smaller pieces allows us to iterate and get feedback faster (Correct)
-
We can defer behaviour that’s lower priority (Correct)
-
Smaller stories are less likely to contain unknown unknowns (Correct)
-
Doing TDD and refactoring becomes much easier when we proceed in small steps (Correct)
-
Small steps help us keep momentum, which is motivating (Correct)
Explanation:
Just like grains of sand flow through the neck of a bottle faster than pebbles, the smaller you can slice your stories, the faster they will flow through your development team.
It’s important to preserve stories as a vertical slice right through your application, that changes the behaviour of the system from the point of view of a user, even in a very simple way.
That’s why we call it slicing rather than splitting.
Why did we discourage doing Formulation as part of an Example Mapping conversation?
-
Trying to write Gherkin slows the conversation down, which means you might miss the bigger picture. (Correct)
-
It’s usually an unnecessary level of detail to go into when you’re trying to discover unknown unknowns. (Correct)
-
Formulation should be done by a separate team
-
One person should be in charge of the documentation
Explanation:
This is why we’ve separated Discovery from Formulation. It’s better to stay relatively shallow and go for breadth at this stage - making sure you’ve looked over the entire user story without getting pulled into rabbit holes.
Product Owners and Domain Experts are often busy people who only have limited time with the team. Make the most of this time by keeping the conversation at the level where the team can learn the maximum amount from them.
8.2. Example Mapping: How?
We first developed example mapping in face-to-face meetings, using a simple multi-colour pack of index cards and some pens. For teams that are working remotely, there are many virtual equivalents nowadays.
We use the four different coloured cards to represent the four main kinds of information that arise in the conversation.
We can start with the easy stuff: Take a yellow card and write down the name of the story.
Now, do we already know any rules or acceptance criteria about this story?
Write each rule down on a blue card:
They look pretty straightforward, but let’s explore them a bit by coming up with some examples.
Darren the developer comes up with a simple scenario to check he understands the basics of the “buy” rule: "I start with 10 credits, I shout buy my muffins and then I want to buy some socks, then I have zero credits. Correct?"
"Yes", says Paula.
Darren writes this example up on a green card, and places it underneath the rule that it illustrates.
Tammy the tester chimes in: "How about the one where you shout a word that contains buy, like buyer for example? If you were to shout I need a buyer for my house. Would that lose credits too?"
Paula thinks about it for a minute, and decides that no, only the whole word buy counts. They’ve discovered a new rule! They write that up on the rule card, and place the example card underneath it.
Darren asks: "How do the users get these credits? Are we building that functionality as part of this story too?"
Paula tells him that’s part of another story, and they can assume the user can already purchase credits. They write that down as a rule too.
This isn’t a behaviour rule - it’s a rule about the scope of the story. It’s still useful to write it down since we’ve agreed on it. But it won’t need any examples. We could also have chosen to use a red card here to write down our assumption.
Still focussed on the “buy” rule, Tammy asks: "What if they run out of credit? Say you start with 10 credits and shout buy three times. What’s the outcome?"
Paula looks puzzled. "I don’t know," she says. "I’ll need to give that some thought."
Darren takes a red card and writes this up as a question.
They apply the same technique to the other rule about long messages, and pretty soon the table is covered in cards, reflecting the rules, examples and questions that have come up in their conversation. Now they have a picture in front of them that reflects back what they know, and still don’t know, about this story.
8.2.1. Lesson 2 - Questions
What do the Green cards represent in an example map?
-
Stories
-
Rules
-
Examples (Correct)
-
Questions or assumptions
Explanation:
We use the green card to represent examples because when we turn them into tests we want them to go green and pass!
What do the Blue cards represent in an example map?
-
Stories
-
Rules (Correct)
-
Examples
-
Questions or assumptions
Explanation:
We use the blue cards to represent rules because they’re fixed, or frozen, like blue ice.
What do the Red cards represent in an example map?
-
Stories
-
Rules
-
Examples
-
Questions or assumptions (Correct)
Explanation:
We use the red cards to represent questions and assumptions because it indicates danger! There’s still some uncertainty to be resolved here.
What do the Yellow cards represent in an example map?
-
Story (Correct)
-
Rule
-
Example
-
Question or assumption
Explanation:
We chose the yellow cards to represent stories in our example mapping sessions, mostly because that was the last colour left over in the pack!
Look at the following example map. Do you think the team is ready to start coding yet?
-
No. There are still a lot of questions to resolve.
-
No. They probably haven’t explored the story enough yet. More conversation needed. (Correct)
-
No. There are too many rules. They should try to slice the story first.
-
Yes. There’s a good number of examples for each rule, and no questions.
Explanation:
When an example map shows only a few cards, and some rules with no examples at all, it suggests that either the story is very simple, or the discussion hasn’t gone deep enough yet.
Look at the following example map. Do you think the team is ready to start coding yet?
-
No. There are still a lot of questions to resolve.
-
No. They probably haven’t explored the story enough yet. More conversation needed.
-
No. There are too many rules. They should try to slice the story first.
-
Yes. There’s a good number of examples for each rule, and no questions. (Correct)
Explanation:
This example map shows a good number of examples for each rule, and no questions. If the team feel like the conversation is finished, then they’re probably ready to start hacking on this story.
Look at the following example map. Do you think the team is ready to start coding yet?
-
No. There are still a lot of questions to resolve. (Correct)
-
No. They probably haven’t explored the story enough yet. More conversation needed.
-
No. There are too many rules. They should try to slice the story first.
-
Yes. There’s a good number of examples for each rule, and no questions.
Explanation:
The large number of red cards here suggests that the team have encountered a number of questions that they couldn’t resolve themselves. Often this is an indication that there’s someone missing from the conversation. It would probably be irresponsible to start coding until at least some of those questions have been resolved.
Look at the following example map. Do you think the team is ready to start coding yet?
-
No. There are still a lot of questions to resolve.
-
No. They probably haven’t explored the story enough yet. More conversation needed.
-
No. There are too many rules. They should try to slice the story first. (Correct)
-
Yes. There’s a good number of examples for each rule, and no questions.
Explanation:
When an example map is wide like this, with a lot of different rules, it’s often a signal that there’s an opportunity to slice the story up by de-scoping some of those rules from the first iteration. Even if it’s not something that would be high enough quality to ship to a customer, you can often defer some of the rules into another story that you can implement later.
8.3. Example Mapping: Conclusions
As you’ve just seen, an example mapping session should go right across the breadth of the story, trying to get a complete picture of the behaviour. Inviting all three amigos - product owner, tester and developer - is important because each perspective adds something to the conversation.
Although the apparent purpose of an example mapping session is to take a user story, and try to produce rules and examples that illustrate the behaviour, the underlying goal is to achieve a shared understanding and agreement about the precise scope of a user story. Some people tell us that example mapping has helped to build empathy within their team!
With this goal in mind, make sure the session isn’t just a rubber-stamping exercise, where one person does all the talking. Notice how in our example, everyone in the group was asking questions and writing new cards.
In the conversation, we often end up refining, or even slicing out new user stories to make the current one smaller. Deciding what a story is not - and maximising the amount of work not done - is one of the most useful things you can do in a three amigos session. Small stories are the secret of a successful agile team.
Each time you come up with an example, try to understand what the underlying rule or rules are. If you discover an example that doesn’t fit your rules, you’ll need to reconsider your rules. In this way, the scope of the story is refined by the group.
Although there’s no doubt of the power of examples for exploring and talking through requirements, it’s the rules that will go into the code. If you understand the rules, you’ll be able to build an elegant solution.
As Dr David West says in his excellent book "Object Thinking", if you understand the problem well enough, the solution will take care of itself.
Sometimes, you’ll come across questions that nobody can answer. Instead of getting stuck trying to come up with an answer, just write down the question.
Congratulations! You’ve just turned an unknown unknown into a known unknown. That’s progress.
Many people think they need to produce formal Gherkin scenarios from their three amigos conversations, but in our experience that’s only occasionally necessary. In fact, it can often slow the discussion down.
The point of an example mapping session is to do the discovery work. You can do formulation as a separate activity, next.
One last tip is to run your example mapping sessions in a timebox. When you’re practiced at it, you should be able to analyse a story within 25 minutes. If you can’t, it’s either too big, or you don’t understand it well enough yet. Either way, it’s not ready to play.
At the end of the 25 minutes, you can check whether everyone thinks the story is ready to start work on. If lots of questions remain, it would be risky to start work, but people might be comfortable taking on a story with only a few minor questions to clear up. Check this with a quick thumb-vote.
8.3.1. Lesson 3 - Questions
Which of the following are direct outcomes you could expect if your team starts practising Example Mapping?
-
Less rework due to bugs found in your stories (Correct)
-
Greater empathy and mutual respect between team members (Correct)
-
Amazing Gherkin that reads really well
-
Smaller user stories (Correct)
-
A shared understanding of what you’re going to build for the story (Correct)
-
More predictable delivery pace (Correct)
-
A quick sense of whether a story is about the right size and ready to start writing code. (Correct)
Explanation:
We don’t write Gherkin during an example mapping session, so that’s not one of the direct outcomes, though a good example mapping session should leave the team ready to write their best Gherkin.
Which of the following presents the most risk to your project?
-
Unknown unknowns (Correct)
-
Known unknowns
-
Known knowns
Explanation:
In project management, there are famously "unknown unknowns", "known unknowns" and "known knowns". The most dangerous are the "unknown unknowns" because not only do we not know the answer to them, we have not even realised yet that there’s a question!
9. Support Code
In Chapter 7 we refined the Gherkin of the Premium Accounts feature, turning what had started out as nothing more than an automated test into some valuable documentation.
As we did that, we pushed the "how" down, making the scenarios themselves more declarative of the desired behaviour, pushing the implementation details of the testing into the code in the step definitions below.
In doing this, we got more readable, maintainable and useful scenarios in exchange for more complex automation code. In this chapter we’ll show you how to organise your automation support code so that you won’t be afraid of making this trade-off.
10. Acceptance Tests vs Unit Tests
In the last chapter, we extracted a layer of support code from your step definitions to keep your Cucumber code easy - and cost-effective - to maintain.
We’re going to keep things technical in this chapter. Remember that bug we spotted right back at the beginning of Chapter 7, where the user was over-charged if they mentioned “buy” several times in the same message? It’s finally time to knuckle down and fix it.
As we do so, you’re going to get some more experience of the inner and outer BDD loops that we first introduced you to in Chapter 5. We’ll explore the difference between unit tests and acceptance tests, and learn the value of each.
If you’re someone who doesn’t normally dive deep into code, try not to worry. We think you’ll find it valuable to see how different kinds of tests complement each other in helping you to build a quality product.
11. Epilogue
This concludes our epic journey to get you started using Cucumber as it was intended - a tool to help you and your team decide what to build, build it, and maintain it for years to come.
I’ve been working with these techniques for 20 years now, and I’m still learning new stuff every day. So don’t get disheartened if it seems overwhelming sometimes.
There’s a great supportive community of other practitioners waiting for you in our Community Slack, and there is a wealth of great books you can pick up for further study.
There’s John Ferguson Smart’s BDD in Action.
There’s Richard Lawrence and Paul Rayner’s book Behavior-Driven Development with Cucumber
And last but definitely not least, there’s Seb Rose and Gaspar Nagy’s series of three, The BDD Books: Discovery, Formulation and Automation.
If you’re keen to see more courses here on Cucumber School on other topics, or you’d just like to give us some feedback on this course, please come into the Slack and let us know. We’d love to hear from you.