How to Write Acceptance Tests

Comments • 47

  • @gpzim981
    @gpzim981 3 years ago +46

    Imagine the honour of working on a team led by Dave Farley.

  • @PaulSebastianM
    @PaulSebastianM a month ago +1

    Imagine the level of day-to-day happiness at work when Dave Farley is your team lead or principal engineer.

  • @seanregehr4921
    @seanregehr4921 2 years ago +34

    [User Stories] become [Acceptance Tests] which is [Behavior Driven Development]: "Doing the RIGHT thing."
    [Code Functionality] becomes [Unit Testing] which is [Test Driven Development]: "Doing the THING right."
    In both scenarios, tests are written to be loosely coupled, which enables scaling and future changes without (mostly) breaking anything. The tests only care that "the system works", not how the system works.

    • @calorus
      @calorus 2 years ago +2

      ^Excellent comment.
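The user-story-to-acceptance-test mapping described above can be sketched as an executable specification. This is a hypothetical, in-memory illustration; class and method names like `BookShopDsl` are invented here, not taken from the video's repository:

```python
class BookShopDsl:
    """Minimal in-memory stand-in for the DSL layer of an acceptance test."""

    def __init__(self):
        self.inventory = {"Continuous Delivery": 44.99}  # title -> price
        self.orders = []
        self.selected = None

    def select_book(self, title):
        assert title in self.inventory, f"{title} not in stock"
        self.selected = title

    def pay_with_credit_card(self, card_number):
        # The test speaks the user story's language; payment details stay opaque.
        self.orders.append((self.selected, self.inventory[self.selected]))

    def order_confirmed(self, title):
        return any(t == title for t, _ in self.orders)


# The acceptance test: pure 'what', in the words of the user story.
shop = BookShopDsl()
shop.select_book("Continuous Delivery")
shop.pay_with_credit_card("4111-1111-1111-1111")
assert shop.order_confirmed("Continuous Delivery")
```

Note that the test never mentions buttons, URLs, or payment gateways: that is what keeps it stable as the system changes.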

  • @TheJessejunior
    @TheJessejunior 3 years ago +2

    Very good! Simple and effective! Thanks again, sir!

  • @_djordje
    @_djordje 3 years ago +2

    Excellent video, thank you.

  • @NilsElHimoud
    @NilsElHimoud 3 years ago +2

    Thank YOU for sharing this.

  • @ziabasit8745
    @ziabasit8745 3 years ago +1

    Thank you for doing this Video!

  • @maximilianosorich554
    @maximilianosorich554 a year ago +1

    the visual robot example was very nice...

  • @ruixue6955
    @ruixue6955 a year ago +3

    0:16 *how to write acceptance tests that don't break as your system changes*
    1:00 *acceptance tests are always written from the perspective of an external user of the system* - try to separate how the system works from what we would like it to do. The focus is on *what*
    3:14 1st thing to do: a requirement from customers - User Story: pay for a book with a credit card
    3:44 example: somebody buying a book
    4:20 example executable specification in a DSL
    4:5
    6:17 next layer: *DSL for testing*
    7:07 example implementation code of the DSL
    7:58 3rd layer: *protocol driver*
    8:25 job of the protocol driver:
    9:15 what the protocol driver is: the only layer that understands how the system works
    9:28 hidden completely from the test cases
    9:48 the bottom layer: the System Under Test
    10:25 example demo
    10:42 protocol driver sample code
    12:40 recap
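The four layers in the timestamps above (test cases, DSL, protocol driver, system under test) can be sketched roughly like this. All class names here are invented for illustration, and the "driver" is an in-memory stand-in rather than a real HTTP or UI driver:

```python
class FakeBookShop:
    """Stand-in for the system under test (SUT)."""
    def __init__(self):
        self.orders = []

    def place_order(self, title):
        self.orders.append(title)


class StubProtocolDriver:
    """Protocol driver: the only layer that knows HOW to talk to the SUT.
    A real one would drive HTTP, a message queue, or a browser."""
    def __init__(self, sut):
        self.sut = sut

    def submit_order(self, title):
        self.sut.place_order(title)

    def has_order(self, title):
        return title in self.sut.orders


class ShoppingDsl:
    """DSL layer: domain language only, no knowledge of transport or UI."""
    def __init__(self, driver):
        self.driver = driver

    def buy(self, title):
        self.driver.submit_order(title)

    def assert_purchased(self, title):
        assert self.driver.has_order(title), f"no order for {title}"


# Test-case layer: says WHAT should happen, nothing about HOW.
shop = ShoppingDsl(StubProtocolDriver(FakeBookShop()))
shop.buy("Continuous Delivery")
shop.assert_purchased("Continuous Delivery")
```

Swapping `StubProtocolDriver` for a real web or API driver leaves the DSL and the test case untouched, which is the point of the layering.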

  • @OthmanAlikhan
    @OthmanAlikhan 2 years ago

    Thanks for the video =)

  • @ML-hf6ii
    @ML-hf6ii 2 years ago

    love that channel

  • @random6434
    @random6434 3 years ago +2

    So it turns out I've been writing my "unit tests" like this, or something like this, for years. Normally, after writing a few tests, I start gathering setup and assertions into some kind of "test environment" class, just for DRY purposes.

  • @gonzalowaszczuk638
    @gonzalowaszczuk638 3 years ago +3

    How would you handle side-effects and setup, to make sure the tests are reproducible and independent? For example, who is in charge of making sure the "Continuous Delivery" book exists in the book store before running the acceptance test (because it would fail otherwise)? Who is in charge of resetting the state of the system after the test is finished? The protocol driver?

    • @ContinuousDelivery
      @ContinuousDelivery  3 years ago +5

      The test gets the system into the state ready for that test. I use an approach that I call "functional isolation", meaning that the data for each test doesn't leak to other tests, so there's no need for clean-up other than dropping the whole system, or DB, at the end.
      I think that the development team owns all this stuff, and it is all completely automated.
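A minimal sketch of what "functional isolation" could look like, assuming the common approach of giving each test its own uniquely-named data. The account idea here is an invented example, not the actual implementation from the video:

```python
import uuid


class IsolatedAccount:
    """Each test works through its own uniquely-named account, so its data
    cannot leak into any other test and no per-test clean-up is needed."""

    _store = {}  # stands in for the shared system/database

    def __init__(self):
        self.account_id = uuid.uuid4().hex  # unique per test
        self._store[self.account_id] = []

    def add_book(self, title):
        self._store[self.account_id].append(title)

    def books(self):
        return list(self._store[self.account_id])


# Two tests hit the same shared store but never see each other's data.
test_a, test_b = IsolatedAccount(), IsolatedAccount()
test_a.add_book("Continuous Delivery")
assert test_b.books() == []
assert test_a.books() == ["Continuous Delivery"]
```

Because every test's data lives under its own unique key, tests can run in parallel against one deployed system and the whole store can simply be dropped at the end of the run.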

    • @gonzalowaszczuk638
      @gonzalowaszczuk638 3 years ago +3

      @@ContinuousDelivery That sounds very interesting. Do you have a video/resource on the subject of "functional isolation"?

  • @roman_mf
    @roman_mf a year ago +1

    Hello Dave, thank you very much for all the content that you produce. I like the idea of Channels. I went through the code at the GH link in the description, but there are a couple of things I don't understand, and I would greatly appreciate it if you could clear these up a little:
    1) You have several acceptance test examples, and each of them seems to duplicate channels in its @Channel annotations. What is the logic behind this?
    2) I see that you have a separate class BookShopDrivers that acts as a registry for all drivers that the bookshop application is using. However, the driver() method always returns the first driver from the @Channel's arguments, which I can see is connected with 1), because each new test adds a new channel as the first item on the list, but again, I am trying to understand the logic behind all of this.

    • @ContinuousDelivery
      @ContinuousDelivery  a year ago +1

      “Channels” may be a misnomer, but I can’t think of a better name. The name made sense when we first came up with the idea. Then they represented “different channels of communication” with a single system: Web, public API, institutional Std API, etc. Later, one of my clients used the same idea to represent an old version of their system and a new one, running the same tests against each system.
      Fundamentally, the “channels” define which Protocol Driver to choose.
      The example in the code that you point to is a sketch, not a fully working version of the channel idea. I don’t have a publishable version of the custom test-runner that you need to make the automated switching of protocol drivers work, so I didn’t finish this code. It was written originally as an example for someone, so I didn’t need to take it any further. Sorry for the confusion.
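A rough sketch of the channel idea as described in this reply: the channel name selects which protocol driver the same test runs against, and picking the first declared channel mirrors the behaviour the question observes in the GitHub sketch. All names here are hypothetical:

```python
class WebDriverStub:
    """Pretend web-UI protocol driver."""
    def submit_order(self, title):
        return f"web:{title}"


class ApiDriverStub:
    """Pretend public-API protocol driver."""
    def submit_order(self, title):
        return f"api:{title}"


# Registry mapping channel names to protocol drivers.
DRIVERS = {"web": WebDriverStub, "api": ApiDriverStub}


def driver_for(channels):
    # Pick the first declared channel, as the question notes the sketch doing;
    # a full test-runner would iterate and run the test once per channel.
    return DRIVERS[channels[0]]()


# The same test body runs unchanged against either channel.
for channel in ("web", "api"):
    result = driver_for([channel]).submit_order("Continuous Delivery")
    assert result == f"{channel}:Continuous Delivery"
```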

  • @reverendbluejeans1748

    Is this e2e testing?

  • @donnyroufs551
    @donnyroufs551 a year ago

    In a standard 2-tier setting (client and server), would you write acceptance tests on both the client and the backend, or truly do it from the client all the way to the backend (e2e)?

    • @ContinuousDelivery
      @ContinuousDelivery  a year ago +1

      It depends: is each of the pieces independently deployable? That is, can I release each without testing it with the other? If not, then yes, I would test them together.
      I define the scope of acceptance testing in my deployment pipeline as being aligned with whatever it is that I will deploy. I want to test everything that I deploy together, together, before release, and acceptance testing is the most effective, most efficient way to do that.

    • @donnyroufs551
      @donnyroufs551 a year ago

      @@ContinuousDelivery Makes sense. How would you deal with replacing out-of-process dependencies? I assume you would need some kind of test client that knows about both the client and the server, e.g.:
      TestClient -> server (responsible for setting state, mocking out-of-process dependencies)
      TestClient -> client
      Tests -> TestClient
      edit:
      Wouldn't your acceptance tests be kind of useless during the TDD cycle, since both sides now need to be done for them to actually run? I have always relied on acceptance tests to tell me whether my API does what is expected, but this doesn't work the moment you write them end to end.

  • @thigmotrope
    @thigmotrope a year ago

    So PageObjects are abstractions that live in the protocol driver layer? If you only have one protocol to support (e.g. web), the argument for the layers seems to weaken. I do think the DSL (BDD with mocha/chai) in the language of the problem domain is helpful, though.

    • @ContinuousDelivery
      @ContinuousDelivery  a year ago

      Yes, page drivers are a form of protocol driver. I don't agree that a single protocol weakens the argument; the separation is still good, even if you only have one. I'd argue that it is probably a poor design if you only have one, because a better design would separate the concerns better, but even so, the layering is important. The DSL should be clean of system-implementation specifics; the protocol layer is where the translation happens.
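A minimal sketch of a page-object-style protocol driver, with invented selectors and a fake page: all knowledge of the "how" (selectors, clicks) stays in the driver, so the DSL above it can remain free of implementation specifics:

```python
class FakePage:
    """Stand-in for a browser page; just records interactions."""
    def __init__(self):
        self.clicks = []
        self.typed = []

    def click(self, selector):
        self.clicks.append(selector)

    def type_into(self, selector, text):
        self.typed.append((selector, text))


class BookShopPageDriver:
    """Page-object-style protocol driver: the selectors are the 'how',
    hidden from every layer above."""
    SEARCH_BOX = "#search-box"      # invented selectors
    BUY_BUTTON = "button.buy-now"

    def __init__(self, page):
        self.page = page

    def buy(self, title):
        self.page.type_into(self.SEARCH_BOX, title)
        self.page.click(self.BUY_BUTTON)


page = FakePage()
BookShopPageDriver(page).buy("Continuous Delivery")
assert page.typed == [("#search-box", "Continuous Delivery")]
assert page.clicks == ["button.buy-now"]
```

If the UI changes its selectors, only this driver changes; the DSL and the test cases stay intact.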

  • @MrSkinbad
    @MrSkinbad 2 years ago +2

    Great content, but I don't get the "robot in a book store" analogy - it works as long as the UX for e-commerce websites is modelled around how physical stores operate. If your physical bookstore were a "smart" store where you just walked out with the stuff you want and are automatically billed, the analogy falls down.
    More than that, I see how it's nice to write the acceptance test cases in business language, but aren't you literally aiming to test the UI here? Surely this is where the abstraction should end, e.g. we would have different test cases for a mobile app vs a web app, since the UI is surely different in some ways.
    Or is the suggestion actually that we would abstract the test cases to such an extent that we can use the same cases for multiple UIs? I just wonder how realistic or useful that actually is.

    • @ContinuousDelivery
      @ContinuousDelivery  2 years ago +3

      In that case the business case has changed. Of course there are limits to how far you can take this approach, but the point is to model the outcome, and test for that, and that is ALWAYS a more stable, more generic thing than any specific implementation that you choose to deliver the outcome that you were aiming for.

  • @miguelgarciadasilva
    @miguelgarciadasilva 2 years ago +4

    Great content as always! In my understanding, with this approach we have three layers of testing:
    - Unit testing: for the domain layer (in-process dependencies and mock external dependencies).
    - Integration testing: for the infrastructure layer (every adapter in isolation with technologies like testcontainers or MockServer).
    - Acceptance tests: as end-to-end testing for the use cases (using all the real collaborators, deployed in an environment similar to prod).
    Is this the point, or did I misunderstand something?

    • @ContinuousDelivery
      @ContinuousDelivery  2 years ago +4

      I usually say that 2 layers are mandatory: unit & acceptance. The acceptance tests act as a kind of super-integration test, and I'd say you need these for every change. Integration tests are valuable, but it depends more on the context. They are useful for some kinds of code and not needed for others, but mostly, yes, I think that is a reasonable summary.

  • @torkleyy9168
    @torkleyy9168 2 years ago +4

    Thank you for these excellent videos! I'm not sure I understood how BDD, Continuous Delivery and automated Acceptance Testing fit together, maybe you can clear things up.
    Given that
    work should be integrated regularly and
    acceptance tests should, too, be written before implementing the code
    does the DSL help anybody other than the developer making the change?
    It seems to me such a simplified DSL could be written by a less technical person, but with BDD that is discouraged.
    Now if that's true, it seems to me that you have to either
    1) completely implement your user story in one commit
    2) have your pipeline failing until the written scenarios are fully implemented
    3) collaborate on a branch to implement the story, risking the creation of long-lived feature branches
    It seems like wasted effort to create such a simple natural language DSL if it doesn't promote collaboration with e.g. QA engineers who can write the tests.

    • @ContinuousDelivery
      @ContinuousDelivery  2 years ago +5

      ...or 4) mark acceptance tests that represent features that are "in-development" as "in-development" and don't run them in the pipeline.
      I'd pick 4!

    • @torkleyy9168
      @torkleyy9168 2 years ago

      That makes sense, thank you! And is there any room for tests not written by developers, or is that always a bad idea?

    • @ContinuousDelivery
      @ContinuousDelivery  2 years ago +1

      @@torkleyy9168 No, it is OK, but the dev team MUST own responsibility for the tests once they are written. They are the people who will change things that break the tests, so they should be the first to see the breakage and fix it immediately.

    • @andreasv9472
      @andreasv9472 a year ago

      @@ContinuousDelivery I still don't understand the difference between ATDD and BDD. They seem to be the same thing?

    • @ContinuousDelivery
      @ContinuousDelivery  a year ago +2

      It's a nuance, and probably doesn't matter much. I would make the distinction that BDD is about focusing your testing on evaluating the behaviour of the system. This can be useful whatever the nature of the test; BDD works for tiny, fine-grained, TDD-style tests or bigger, more complex, more whole-system functional tests.
      ATDD is the second one, but not the first.
      BDD was originally invented to cover the first case, to find a way to teach TDD better, but it has become synonymous with the second because of more heavyweight tools like SpecFlow and Cucumber.
      So for practical purposes BDD == ATDD, but as someone who was in at the birth of BDD, I still find its original aim useful and important.

  • @stanislavcoros
    @stanislavcoros 3 years ago

    Everything said here is normal logic, nothing special. But some testers think their job is so out of this world, as if they were superhuman :D yet they are replaceable by a simple script.

    • @ContinuousDelivery
      @ContinuousDelivery  3 years ago +8

      If all they do is translate some "test script" into key presses, then yes, but I think most testers bring more to the party than that.

  • @mml1224
    @mml1224 2 years ago +2

    Helpful, but the hand movements are very distracting.