Russ Cox - Go Testing By Example | GopherConAU 2023

  • Published 3 Jan 2024
  • Writing good tests is a critical part of software engineering that doesn't always get the attention it deserves. This talk will present strategies and concrete approaches you can use when writing your own programs, illustrated by some of the best tests in Go's own source code.
    Slides: docs.google.com/presentation/...
    About Russ Cox
    Russ is the co-creator of the Go programming language. He currently leads the development of Go at Google.
  • Science & Technology

Comments • 7

  • @sinamobasheri • 12 days ago

    10:54 - Tip #4 Write exhaustive tests

  • @SkeletonLau • 6 months ago • +6

    Where can I find the "uncover" program?

  • @erikkalkoken3494 • 5 months ago • +11

    Mostly very good advice on how to write good tests, and a great talk well worth watching.
    I disagree on one point, though: putting a lot of logic into test cases is a bad idea. It makes them harder to understand and more likely to be buggy themselves. Instead, test code should be as trivial as possible (using tables for test cases is fine).
    This also means it is totally fine to have redundant code in your test cases.
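    To make the "tables are fine" point concrete, here is a minimal sketch of a table-driven test in Go: each case is plain data and the loop body stays trivial. The `Abs` function and its cases are illustrative, not from the talk; in a real `_test.go` file the loop would report failures via `t.Errorf` instead of printing.

    ```go
    package main

    import "fmt"

    // Abs is a stand-in function under test (illustrative).
    func Abs(x int) int {
    	if x < 0 {
    		return -x
    	}
    	return x
    }

    func main() {
    	// Each test case is plain data; the checking logic lives
    	// in one trivial loop, so the test itself is hard to get wrong.
    	tests := []struct {
    		in, want int
    	}{
    		{0, 0},
    		{3, 3},
    		{-3, 3},
    	}
    	for _, tt := range tests {
    		if got := Abs(tt.in); got != tt.want {
    			// In a real _test.go file this would be t.Errorf.
    			fmt.Printf("Abs(%d) = %d, want %d\n", tt.in, got, tt.want)
    		}
    	}
    }
    ```

    Adding a new case is a one-line change, and any redundancy between cases is harmless because each row is self-describing.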

    • @michaelmoser4537 • 5 months ago

      It depends on the type of test and the number of test cases.
      For an integration test: if you are testing a scripting language with hundreds of test cases, a table-driven test would be hard to maintain. In that case it is preferable to define each test case as an input file paired with an expected output file.
      A table-driven test is fine for unit tests, though: it is perfect for the binary search function example.
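      The input-file/expected-output-file approach the reply describes is often called a golden-file test. A minimal sketch in Go, under stated assumptions: `run` is a stand-in for the interpreter under test (here it just upper-cases its input), and the test files are written to a temp directory so the example is self-contained, whereas a real test would keep them under `testdata/` and use `t.Errorf`.

      ```go
      package main

      import (
      	"fmt"
      	"os"
      	"path/filepath"
      	"strings"
      )

      // run is a stand-in for the program under test (illustrative:
      // it just upper-cases the script it is given).
      func run(script string) string {
      	return strings.ToUpper(script)
      }

      func main() {
      	// In a real test these files would live under testdata/.
      	dir, err := os.MkdirTemp("", "golden")
      	if err != nil {
      		panic(err)
      	}
      	defer os.RemoveAll(dir)
      	os.WriteFile(filepath.Join(dir, "hello.input"), []byte("hello"), 0o644)
      	os.WriteFile(filepath.Join(dir, "hello.golden"), []byte("HELLO"), 0o644)

      	// Each case is a pair of files: run the .input, compare to the .golden.
      	inputs, _ := filepath.Glob(filepath.Join(dir, "*.input"))
      	for _, in := range inputs {
      		src, _ := os.ReadFile(in)
      		want, _ := os.ReadFile(strings.TrimSuffix(in, ".input") + ".golden")
      		if got := run(string(src)); got != string(want) {
      			// In a real _test.go file this would be t.Errorf.
      			fmt.Printf("%s: got %q, want %q\n", filepath.Base(in), got, want)
      			continue
      		}
      		fmt.Printf("%s: ok\n", filepath.Base(in))
      	}
      }
      ```

      With this layout, adding the hundredth test case means adding two files, not editing a hundred-row table.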

  • @jackypaulcukjati3186 • 6 months ago • +3

    I would like to uncover uncover

  • @covle9180 • 5 months ago • +8

    Russ is clearly way more experienced than I'll ever be, so I'm probably wrong, but I don't like obscure logic in tests, since the logic of the test might just as well be wrong.
    I also don't like the idea of custom mini languages and tests that 'correct' tests. It just seems like there's so much room to introduce problems into your tests.
    I want my tests to be easy to understand at a glance, for me and for the other people on the team. If I have to figure out some obscure custom mini language to understand whether my code is bad or the test is bad, I'm not happy.