Strongly disagree.
Addressing Problem #1: having 100% code coverage solves the "coverage drop" issue, where deleting lines shifts the percentage. Deleting 100%-covered code leaves the remainder 100% covered.
Addressing Problem #2: this is a straw-man argument. You are talking about writing good, intentional, readable tests, which I agree people should do, but how does that relate to coverage at all?
Addressing Problem #3 ("unit tests are the least useful tests"): unit tests aren't supposed to enforce correctness; they are there to make you write testable code. Testable code IS good code, and unit tests are a guard rail that helps developers get there. Coverage then works as a litmus test for pull requests, checking that every line of code is deliberate and intentional. I have often seen code that would never be executed get checked in. We adopted a 100% coverage strategy, and that issue is now totally eliminated.
You are also giving your viewers a false trade-off: unit tests OR integration/e2e tests. Why not all of them? Use ALL types of testing. Why pit them against each other and cause division?
If you are reading this and still aren't convinced, then just know that explicit is better than implicit. Add a comment like /* v8 ignore next 10 */ to mark the next 10 lines as intentionally uncovered. I usually tell my team to write a comment above it as well, stating WHY it is uncovered. That way, the code is still 100% covered, and everyone gets better transparency.
I always liked the idea of mutation tests, but found them high maintenance, with a relatively low ROI in practice. Maybe I need to give them another try.
@@jamesadcock4252 Yeah, they can be very high maintenance, but you can tune the mutation report settings and make them less tedious to work with. Going through the report gives a lot of feedback on different pitfalls in your unit tests and in your code as well. I like to run them from time to time and find areas of improvement.
Quantifying test coverage is like scoring ice skating. It's not like scoring in basketball. You can hit every line and still have buggy code if your tests are engineered to get coverage and not to test functionality. Take it from me. I've written both. Good unit tests make less buggy code but tests for coverage numbers make bosses who think we're playing basketball happy.
It's nothing new, it's just the persona he puts on when he's explaining things to us. It's actually an ingredient of his success; you might want to look up the theory of "cinematic movement". Every interesting video has movement, and then you'll notice the blandness of other YouTube creators who aren't influential. Success is about more than the content; it needs to be interesting and effective.
The pronunciation is close to correct, but not perfect. It would be nice if web resources allowed a custom setting for name pronunciation, so you could upload a sound file and a transcription in the International Phonetic Alphabet. Best regards.
Any code coverage requirement is stupid. Unit tests should be used only where necessary. Code coverage requirements force developers to add unit tests where they are not helpful.
JS sucks. How do you expect the coverage analyzer to know how well covered your code is when you don't even have types? FFS. I agree that the metric is useless and I seldom use it; as the one developing the feature, you should be thinking about the cases while coding.
This is actually interesting. I worked for a company whose goal was to achieve 100% code coverage; the code was still buggy because the devs didn't know how to test properly (me included, haha).
They do that to justify their work
agree :))!
the tests themselves are also code and all codes can be buggy. 🗿
@@PenguinCrayon269 The point isn't only that tests "can be buggy"; it's that even a simple function can have many cases to test, which is hard to do with little effort.
Must be toxic and hectic, no?
Code coverage is way different from code quality.
Code coverage is used to determine if every test went over all the code lines. It's a way to determine what code isn't covered by tests.
Companies use it to make sure that everything is tested thoroughly.
And then this is just a step in the process. To ensure the code works, lots of tools are used.
Code quality should be determined by developers/architects and a combination of tools can be used like, linters, code smell tools, making sure the code adheres to a coding standard, etc.
I disagree with the advice to write more integration and e2e tests than unit tests. I joined a new project within a company that used integration/e2e tests with zero unit tests, covering all test cases with Cypress, and it was a total disaster: if you change one thing, tests in other repos fail, and the suite takes so long to execute that it slows down CI/CD. After months of convincing, the team agreed to add unit tests and rely less on integration and e2e tests: "The Testing Pyramid". I agree we should not just rely on code coverage; it's up to the devs to review the code properly even if it shows 100%.
An E2E test tells you that something has failed in the whole chain, e.g. if the class of an object has changed, the E2E test might fail. That would be easy to fix, but if something doesn't show up, it could be that different lines of code are causing the same problem. The different tests have different levels of resolution. The unit test is the one with the highest, where you can see exactly which line of code is causing the problem. The integration test on the other hand will only show you one module that has a problem and the e2e test will tell you that there is a problem.
I was floored when he started trashing unit tests in favor of e2e! It's called the testing pyramid for a reason!
Another common issue with the 100% metric is that real world code has to include error handling for things outside your control: for example an external network call failing. To test this, you need to be able to inject or mock that error. After a while, all you end up doing is testing your ability to simulate different types of error so that you can test they are handled correctly.
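A minimal sketch of that idea (all names hypothetical): the external call is injected so a test can force the failure path on demand instead of waiting for a real outage.

```javascript
// Sketch (hypothetical names): to cover an error-handling branch for a failure
// outside our control, the external call is injected for testability.
function getBalance(callBankApi) {
  try {
    return callBankApi(); // external dependency, injected by the caller
  } catch (err) {
    return 0; // fallback branch we want a test to reach
  }
}

const workingApi = () => 42;
const failingApi = () => { throw new Error("network down"); };

console.log(getBalance(workingApi)); // 42
console.log(getBalance(failingApi)); // 0: the simulated failure exercises the catch block
```

The cost the comment describes is visible here: `failingApi` tests our ability to simulate the error at least as much as the handling itself.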
While I agree that high code coverage shouldn't be an end in itself, my experience is that pushing for reasonably high coverage (>80%) does lead to better code, particularly in large/complex projects. "Better" encompasses not only SOLID and robustness, but also agile development and maintenance: breaking changes are identified faster, and it's easier to zero in on the hows and whys of unforeseen failures. This in turn makes for more efficient agile development, faster fixes when customers report a problem, new devs entering the project understanding the granularity of the application faster, etc.
This! As long as the test suite is good.
Agreed. Still the software should be properly explained to the new dev from the perspective of the end user and only then should said dev take a peek at the code and the documentation.
@@TheSliderW We house all our documentation on Backstage. Our team philosophy is that if a dev who is unfamiliar with a repo/system cannot find what they need to know, we must update the documentation. So far it's been a great practice; we all add what needs to be known.
My project is a complex distributed web app. Initially we did not have any tests at all. Slowly we introduced unit tests, but because of the distributed nature we were still seeing failures on clients' ends. It was bad code for sure, but it led to stakeholders losing confidence. We then added Cucumber suites, working along BDD principles, which gave the stakeholders some confidence back. In the end it's a mix of TDD and BDD that will make you more confident.
For myself, code coverage testing is more for catching simple typos like in variable names that don't get caught until runtime or configuration errors for uncommon situations; I had one situation where it turns out I had not set up logging properly, and unfortunately I didn't catch it until it tried to log an error.
I completely agree though that testing is hard, and unfortunately simple metrics like this are often emphasized because they are easier for managers and executives to understand, and because they can be used for marketing, even though 100% code coverage doesn't really say anything about how well the code was actually tested.
One of the best benefits of unit testing is that it makes you look at the code you've written for longer. If getting to 100% coverage aids that, then I'm all for it.
I'd take 100 over 0 all day, any day; lots of devs never want to write any tests and prefer to just ship to prod and have the client complain.
I think the point is not about 0 or 100. It's rather about writing the tests that really matter, which might leave coverage at only 60-70%.
@@slowtyper95 That's not my point. I know the point of the video is that sometimes 100 isn't necessary or possible, but my point is that some devs don't do any tests; they just let clients take the wheel as their form of "testing", and if I were in that situation I might as well prefer the totally unnecessary coverage.
Now this comes down to the kind of people you work with. If you're working with a bunch of devs who don't want to write any tests, it makes sense to have a coverage metric to police everyone. But that practice takes you down the path of writing tests just to meet a quota, and you will most likely give up on test quality. Plus, coverage metrics can easily be bypassed using ignore lists, so you end up in a race of writing coverage rules that can be bent. :)
Sure, it's an imperfect metric, but show me a thoroughly tested function that isn't 100% across the board; it's the bare minimum. If you have less, you have yet to prove it works or have redundant code blocks. Either way, you've got more work to do. If you reach a coverage threshold and stop there, then you are the problem.
I wonder how huge sites, entire OS, and apps have even been able to run properly without tests in the past hundred years.
There is a concept called Test Pyramid. Also to check the quality of the unit tests, we can try mutation testing.
I believe coverage is an important metric, focusing on quantity rather than quality. What’s more important than reaching a specific threshold is being able to monitor whether coverage decreases, which typically happens when new code is added without corresponding tests or when tests are removed. The situation you mentioned (deleting lines) can be managed by using skip comments (e.g., /* istanbul ignore next */). This way, we can deliberately specify which code is not being tested, rather than allowing coverage to unintentionally decline.
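A small sketch of that pattern (function and path hypothetical), using Istanbul's ignore-comment syntax: the uncovered line is marked deliberately, with a note saying why, instead of letting the percentage silently drift down.

```javascript
// Sketch (hypothetical function): deliberately excluding one line from coverage.
function resolveConfigPath(env) {
  if (env.CONFIG_PATH) return env.CONFIG_PATH;
  // Fallback below is platform-specific and not exercised in CI:
  /* istanbul ignore next */
  return "/etc/app/config.json";
}

console.log(resolveConfigPath({ CONFIG_PATH: "./local.json" })); // "./local.json"
```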
It's not useless, it's just overkill, like over-engineering for no reason. Testing is important; 100% coverage is overkill!
It's just Goodhart's Law.
When a measure becomes a target, it ceases to be a good measure.
Agree 100%. Been there, done that. It also makes devs write stupid tests just to fulfill coverage.
This is a great video. I’m a big fan of unit tests, but I’ve never understood the obsession with code coverage at some companies. I appreciate you sharing these insights!
I often see arguments like "100% code coverage won't guarantee that your code works as expected" or "unit tests won't catch a certain bug" used as excuses not to write unit tests or collect coverage metrics. Though these statements are true, you will find that code with a high level of good unit tests is usually a lot better than code with low coverage or none. I also think the argument for fewer unit tests and more integration tests is flawed. As unit tests are quicker to write and run, and can be written at the same time as your application code or before it (TDD), you almost certainly want many more unit tests than integration or E2E tests. You wouldn't want to test complex calculations or validation rules using integration or E2E tests. This isn't to say that integration and E2E tests aren't just as important; both have their place and should be used appropriately.
Imo if you have a team of good devs who know what they are doing, then sure, enforcing coverage is pointless. But more often than not, a team will have some people who just work to get paid, so 100% is arguably the easiest way to force them to think before making a code change.
Also, 100% coverage eliminates the coverage-drop issue, lol.
I definitely agree that getting to 100% is sometimes a grind, but with tools like AI it isn't that hard to do, and it can kinda stop others from breaking my code.
Agreed! If a unit test fails, fix or investigate it before pushing to the CI pipeline to avoid disrupting the team's dev app. Highly beneficial in my company and serves a different purpose than e2e testing.
One potential answer to your concern of 100% coverage but not 100% functionality test (i.e. your isEven function) is mutation testing. Look at Stryker and others.
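A hand-rolled sketch of the idea (tools like Stryker automate it): a suite can execute every line of `isEven`, yet a mutant survives because the assertions are too weak. All names and inputs here are hypothetical.

```javascript
// Mutation testing in miniature: 100% line coverage does not mean the
// assertions are strong enough to "kill" a mutated implementation.
const isEven = n => n % 2 === 0;
const mutantIsEven = n => n % 4 === 0; // a mutation a tool might generate

// This suite fully covers isEven's single line...
const suite = impl => impl(4) === true && impl(8) === true;

console.log(suite(isEven));       // true: real implementation passes
console.log(suite(mutantIsEven)); // true: the mutant SURVIVES, so the suite is too weak
// An extra assertion such as impl(2) === true would kill this mutant:
console.log(mutantIsEven(2));     // false, while isEven(2) is true
```

A surviving mutant is exactly the signal the comment is after: coverage said "done", mutation testing says "your tests didn't notice the behavior change".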
Unit tests and code coverage are very important, but these metrics do not create a large sense of confidence.
Adding code quality tools for linting, code quality scanning, OSS vulnerability scans, etc. are all very important.
At my company, in my team, we have an 80% threshold for coverage, but we also need to pass all the quality gates beyond that before code can be merged.
Also, testing 80+% of code doesn’t
a) mean you wrote the right / correct tests
b) cover all edge cases
c) confirm that all the functional requirements for the feature(s) were validated. Coverage != full confidence that the code does everything you expect in all the different cases.
These can be further captured through integration and e2e testing.
It’s such a blurry area of the SDLC. Every team and company have different opinions and processes.
This is a good point. My company also has a lot of integration and e2e tests, but they take very long to run because each test has to ensure isolation and needs more setup, which greatly slows our deployment and iteration time. As a result, we prefer to run more unit tests instead because they are faster.
This example is a bit misleading. Clearly, if you drop a few lines of production code and your target coverage was not 100%, there is a chance that coverage goes down if the lines you removed happened to be covered by tests. However, the chance of this happening in a real, professional codebase is slim, because the total number of lines will most likely not be something like eight. 😂 If you do drop a full module, then it's not weird that you may need to recalibrate your test coverage in other areas if you fell under the target level.
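The small-number effect can be worked through directly (numbers hypothetical):

```javascript
// Why deleting a COVERED line can lower the percentage when totals are small.
// coverage(covered, total) returns the percentage rounded to one decimal.
const coverage = (covered, total) => Math.round((1000 * covered) / total) / 10;

console.log(coverage(7, 8)); // 87.5 -> before: 7 of 8 lines covered
console.log(coverage(6, 7)); // 85.7 -> after deleting one covered line
// In a large codebase the same edit barely moves the needle:
console.log(coverage(7000, 8000)); // 87.5
console.log(coverage(6999, 7999)); // 87.5
```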
Unit tests made me think about how to write code that doesn't require multiple near-identical test cases (Problem #3) and how to avoid writing code like Problem #2.
That takes some of the burden off the coverage policy.
When refactoring the whole codebase, I just delete all the unit tests and write new ones without looking back.
Manual testing at the integration level is still required, because I can't get any real confidence from my unit tests alone.
It's all well and good complaining about test coverage. It doesn't come close to proving the code works. Would you rather be totally blind about what lines have been touched by tests?
As you went on to prove, writing good tests requires testing 100% of the branches and then some. It's worth pointing out that removing lines isn't an issue when you have 100% coverage.
Use SonarQube with branch coverage. At least achieve 80%. This should help a bit.
Hi Kyle, informative as always! Can you also create a video on testing boundaries in unit tests?
If the branch coverage of a pure function is less than 100%, it means there is either unreachable code in that function or not enough tests to cover all cases; in both situations, some code or test changes are needed.
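A tiny illustration (hypothetical function) of the unreachable-code case: a branch that no input can ever reach, so no amount of extra testing will cover it.

```javascript
// Less-than-100% branch coverage of a pure function points at either a branch
// no test exercises yet, or a branch no input CAN exercise, like the last return.
function describeSwitch(flag) {
  if (flag) return "on";
  if (!flag) return "off";
  return "unknown"; // dead code: every value is either truthy or falsy
}

console.log(describeSwitch(true));  // "on"
console.log(describeSwitch(false)); // "off"
// No third test can reach "unknown"; the fix is a code change (delete the
// line), not more tests.
```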
You could achieve 100% code coverage when doing true test-driven development. With TDD, you only write the bare minimum code to pass the tests, and each new test must fail before you write the code that makes it pass.
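A compressed sketch of that loop (hypothetical example): the cases exist first, and the implementation is only as much code as they demand, so every line it contains is exercised.

```javascript
// TDD in miniature: red (cases written first), then green (minimal code to pass).
const cases = [
  [2, true],  // red: these expectations exist before any implementation
  [3, false],
  [0, true],
];

// green: the bare minimum that satisfies the cases above
const isEven = n => n % 2 === 0;

console.log(cases.every(([input, expected]) => isEven(input) === expected)); // true
```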
Maybe you can check on mutation testing score?
In general, if this video demoralizes you from writing unit tests, you're taking the wrong lesson. I think the main intent of the video is: don't just write unit tests, write them properly and check all possible scenarios. Don't write unit tests only for the happy path.
I have been working in the industry for around 15 years now, and I have seen that properly written unit tests can save you from multiple consequences; essentially, they will tell you if a code change broke something else. If you don't test those paths, you can't tell whether a colleague's PR broke something.
Typically I aim for 100% on all four sections, not just for the pipeline, but also to make sure my code works in every edge-case scenario. It takes time, but it is worth it.
This is why test driven development makes sense. You first write a test for what the code should do (the intent) and then you fulfill it.
Also, mutation testing can give you an idea on the quality of your tests. It checks how well each line/block/method is covered.
Code coverage is the easiest metric to falsify; it is easy to abuse testing and assume things work without doing the work properly. Make your life easier: test what should be tested, and have people who actually know how to test.
Bro create Penpot plugins 🎉
Can somebody explain why code coverage goes down by removing a line?
Because in the file being tested he helpfully had a load of unused code in a redundant, unused function. By deleting a line from the function that was being executed, the code coverage reduced.
Seems a bizarre and contrived example. In that situation a developer would simply remove the redundant code or relocate it until it is needed.
Without the redundant function he would have had 100% coverage before and after the bug fix. But that would not have suited the narrative.
love your content but 100% code coverage is not about "better code" (we can all write terrible code and a test to cover it). It is to:
1. Show seniors and stakeholders that the code has, at least, been checked by something at a unit level. Think of it like dotting the 'i's and crossing the 't's.
2. 'Lock' code down against breaking changes. The next developer, in order to change the method's behaviour/output, will also have to update the test.
3. Documents methods.
4. Promote SOLID coding principles (devs are not going to be lazy creating methods with multiple nested branches if they know they have to write a test to cover them, for example).
All these reasons are project management benefits. That's why 80-100% code coverage is a thing.
The idea is that coverage is useless, but testing isn't; tests should focus on functionality.
Point 4 is everything. Wish more ppl knew this
*3. Document methods*
use static type language 💀
*SOLID*
i assume you mean making many tiny functions. what a lot of bull. you should test the interface, not the implementation. don't test all functions, just the public ones.
@@PenguinCrayon269 Typecheckers can't tell what a dev's intention is
If I hear "100% code coverage", all I hear is a manager. However, if they talk about covering the most important features/areas, now we're talking 😊 priorities and importance
Strongly Disagree.
Addressing Problem #1: Having 100% code coverage solves the "coverage drop" issue, where deleting lines shifts the coverage percentage. Deleting 100%-covered code will still leave you at 100% coverage.
Addressing Problem #2: You are doing a straw man argument. You are talking about writing good intentional readable tests, which I agree, people should do. But how does it relate to coverage at all?
Addressing Problem #3: "Unit tests is the least useful tests" - Unit tests aren't supposed to enforce correctness; they are used to ensure you write good, testable code. Testable code IS good code, and unit tests are a guard rail to help developers write it. This means coverage is used as a litmus test for pull requests, to help check that every line of code is deliberate and intentional. I have seen code that will never get executed get checked in many times. We then adopted a 100% coverage strategy, and now that issue is totally eliminated.
You are also giving your viewers a false trade-off. Unit tests OR integration/e2e tests. Why not all of them? Use ALL types of testing. Why pit them against each other to cause division?
If you are reading this and still aren't convinced, then just know that explicit is better than implicit. Add a comment like /* v8 ignore next 10 */ to mark the next 10 lines as intentionally uncovered. I usually tell my team to write a comment above it too, to state WHY it is uncovered. That way, the code is still 100% covered, with better transparency for us all.
Excellent comment and fully agree. Some misleading narrative in this video, especially for inexperienced developers.
Use mutation tests to prove your tests and coverage.
I always liked the idea of mutation tests, but found them high maintenance, with a relatively low ROI in practice. Maybe I need to give them another try.
@@jamesadcock4252 yeah, they can be very high maintenance, but you can tune the mutation report settings and make them less tedious to work with; going through the report then gives a lot of feedback on different pitfalls in your unit tests, and in your code as well.
I like to run them from time to time and find areas for improvement.
Quantifying test coverage is like scoring ice skating. It's not like scoring in basketball. You can hit every line and still have buggy code if your tests are engineered to get coverage and not to test functionality. Take it from me. I've written both. Good unit tests make less buggy code but tests for coverage numbers make bosses who think we're playing basketball happy.
bro why u shake ur head all the time left and right XD
It's nothing new, it's just the persona he puts on when he's explaining things to us. It's actually an ingredient of his success; you might want to look up the theory of "cinematic movement". Every interesting video has movement, and then you'll notice the blandness of other YouTube creators that aren't influential. Success is about more than the content; it needs to be interesting and effective.
that's just how he is, at least when recording tutorials. not a big deal.
The pronunciation is close to the real thing, but not perfect. Web resources could offer a custom setting for the pronunciation of a name, so that one could upload a sound file and a transcription in the International Phonetic Alphabet. Best regards.
Unit tests without a DB call are useless.
- Don't try to change my mind 😅
Bruh wtf do you think a unit test is?
@@lordluke10 I said don't try to change my mind 🙃
Any code coverage requirement is stupid. Unit tests should be used only where necessary. Code coverage requirements force developers to add unit tests where they are not helpful.
why is nobody talking about this
It is not useless...
100% of the right code should be tested, not 100% of all code.
Hello I'm the first
No
Hello first, I'm Dad
js sucks
how do you expect the coverage analyzer to know how well your code is covered when you don't even have types?
ffs
I agree that the metric is useless and I seldom use it; as the one developing the feature, you should be thinking about the cases while coding