That may seem a bit naive, since it's really a start-up project that's two years old, give or take. However, there are three reasons for my decision.
- Because it was a rapid start-up project, done without "real" Agile development, it accrued a surprising amount of code debt in its early days.
- Because the database departs from Rails conventions in some serious ways, particularly its use of composite keys and the fact that it's spread across three databases instead of one. There are good business reasons for these decisions, but they affect our use of Rails.
- Because of the lack of robust testing.
- What are our overall testing goals, really?
- How well are we achieving them, currently?
- What are the most pressing needs?
- How can we best apply our time to improve tests?
- Facilitate careful development (TDD).
- Prevent regressions from being deployed.
- Create enough trust to allow refactoring.
- View and controller tests are really much like "unit" tests, since they are testing a method on a class... it's just a different "kind" of class.
- It's okay to throw away unit tests (including controller/view tests), as long as the features they support are all covered by higher-level tests, and those tests are passing. Implementations change, and some past requirements are likely to change, too.
- Try to write tests that are not fragile (meaning: refactoring doesn't break tests). The underlying idea is that when a feature is complete, all tests should pass, and when the feature is incomplete, some tests should fail. Period. The internals are less important, though someone may have a need to test them (at the unit level) to help with development.
- The blackbox, factory approach is more appropriate than mocks and specs in EOL's environment. But we need to make the domain logic approachable to developers (new ones in particular). Copy/pasting solutions from other tests /works/, but is not ideal. We need to make this easier and more convenient. Developers should /want/ to write tests, because it's easy to do. Newcomers have varying levels of domain knowledge.
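The factory idea above can be sketched in plain Ruby, without Rails or any factory library. The names here (Taxon, build_taxon, the default attributes) are hypothetical illustrations, not EOL's actual domain model; the point is that one helper encodes the domain knowledge of what a "valid" object looks like, so newcomers can build test data by overriding only what matters to their test instead of copy/pasting setup code:

```ruby
# Hypothetical domain object standing in for a real model.
Taxon = Struct.new(:name, :rank, :parent, keyword_init: true)

# Factory helper: one place to encode what a sensible default Taxon
# looks like. Tests override only the attributes they care about.
def build_taxon(overrides = {})
  defaults = { name: "Felis catus", rank: "species", parent: nil }
  Taxon.new(defaults.merge(overrides))
end

# Blackbox style: assert on observable attributes/behavior, not internals.
taxon = build_taxon(rank: "genus")
raise "override not applied" unless taxon.rank == "genus"
raise "default not applied"  unless taxon.name == "Felis catus"
```

In a real suite the same shape would likely live in a shared helper (or a library such as FactoryBot), but the convenience argument is identical: a test that reads `build_taxon(rank: "genus")` documents its own intent.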
1 comment:
Looks like good questions and good decisions. TDD usually means low-level testing of particular nooks and crannies of the implementation, which is not normally needed for API-level, high-level testing.
Without TDD it is hard to move forward and be sure you did the right thing, but the accumulation of TDD tests in a complex project brings fragility. What would be a solution that gives you both: continuing TDD development, but in such a way that the tests do not become increasingly fragile?
Does that mean something like a tdd directory in specs, with automatic cleaning of that directory before deployment testing?