Ara said:
you can basically do that too, but i continually forget which is
expected and which is actual and, as you know, that's a slippery error
to track down at times.
Perhaps - but it's one rule that only needs to be learned once.
I notice that testy supports check <name>, <expected>, <actual> too.
Testy does (intentionally) force you to name your tests, whereas
Test::Unit will happily let you write
check <expected>, <actual>
I really don't like having to name each assertion, maybe because I'm
lazy or maybe because it feels like DRY violation. I've already said
what I want to compare, why say it again?
because littering the example code with esoteric testing framework
voodoo turns it into code in the testing language that does not
resemble how people might actually use the code
I agree with this. This is why I absolutely prefer Test::Unit (and
Shoulda on top of that) over Rspec.
i always end up writing
both samples and tests - one of the goals of testy is that, by having
a really simple interface and really simple human friendly output we
can just write examples that double as tests.
Hmm, this is probably an argument *for* having a DSL for assertions - to
make the assertions read as much like example code ("after running this
example, you should see that A == B and C < D")
Neither
result.check "bar attribute", :expected => 123, :actual => res.bar
nor
assert_equal 123, res.bar, "bar attribute"
reads particularly well here, I think. Ideally it should be as simple as
possible to write these statements of expectation. How about some eval
magic?
expect[
"res.foo == 456",
"res.bar == 123",
"res.baz =~ /wibble/"
]
Maybe need to pass a binding here, but you get the idea. (Before someone
else points it out, this is clearly a case which LISP would be very well
suited to handling - the same code to execute can also be displayed in
the results)
The problem here is reporting on expected versus actual, but perhaps you
could split on space and report the value of the first item.
expected:
- res.foo == 456
- res.bar == 123
unexpected:
-
test: res.baz =~ /wibble/
term: res.baz
value: "unexpected result"
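A minimal sketch of what such an eval-based helper might look like (the name `expect`, the binding argument, and the report shape are all assumptions, not testy's API):

```ruby
# Hypothetical sketch of the eval-magic idea above: evaluate each
# expression string in the caller's binding; when one fails, split on
# space and report the value of the first term.
def expect(expressions, b)
  report = { "expected" => [], "unexpected" => [] }
  expressions.each do |expr|
    if eval(expr, b)
      report["expected"] << expr
    else
      term = expr.split(" ").first          # e.g. "res.baz"
      report["unexpected"] << {
        "test"  => expr,
        "term"  => term,
        "value" => eval(term, b).inspect
      }
    end
  end
  report
end

require 'ostruct'
res = OpenStruct.new(:foo => 456, :bar => 123, :baz => "unexpected result")
report = expect(
  ["res.foo == 456", "res.bar == 123", "res.baz =~ /wibble/"],
  binding
)
```

The report hash could then be dumped as YAML to get output much like the listing above.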
Going too far down this path ends up with rspec, I think.
In fact, I don't really have a problem with writing
res.foo.should == 456
The trouble is the hundreds of arcane variations on this.
You solve this problem by only having a single test (Result#check), and
indeed if rspec only had a single method (should_equal) that would be
fairly clean too. However this is going to lead to awkwardness when you
want to test for something other than equality: e.g.
res = (foo =~ /error/) ? true : false
result.check "foo should contain 'error'", :expected=>true,
:actual=>res
Apart from being hard to write and read, that also doesn't show you the
actual value of 'foo' when the test fails.
Is it worth passing the comparison method?
result.check "foo should contain 'error'", foo, :=~, /error/
But again this is getting away from real ruby for the assertions, in
which case it isn't much better than
assert_match /error/, foo, "foo should contain 'error'"
assert_match /error/, foo # lazy/DRY version
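For what it's worth, passing the comparison as a symbol could be wired up with plain `send` (a hypothetical sketch, not an existing testy method):

```ruby
# Hypothetical: check takes the comparison operator as a symbol and
# sends it to the actual value, so a failure report can include the
# actual value directly.
def check(name, actual, op, expected)
  {
    "name"   => name,
    "ok"     => !!actual.send(op, expected),  # =~ returns an index or nil
    "actual" => actual.inspect
  }
end

foo = "no problems here"
result = check "foo should contain 'error'", foo, :=~, /error/
```

On failure, `result["actual"]` carries the inspected value of `foo`, which is exactly what the two-argument `:expected => true, :actual => res` form loses.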
get a listing of which tests/examples i can run
Yes, parseable results and test management are extremely beneficial.
Those could be retro-fitted to Test::Unit though (or whatever its
replacement in ruby 1.9 is called)
Getting rid of the at_exit magic is also worth doing.
you can also do something like this (experimental) to just make a
simple example
cfp:~/src/git/testy > cat a.rb
require 'testy'
Testy.testing 'my lib' do
test 'just an example of summing an array using inject' do
a = 1,2
a.push 3
sum = a.inject(0){|n,i| n += i}
end
end
Nice, could perhaps show the (expected) result inline too?
test 'an example of summing an array using inject' do
a = 1,2
a.push 3
sum = a.inject(0){|n,i| n += i}
end.<< 6
A bit magical though. Also, we can only test the result of the entire
block, whereas a more complex example will want to create multiple
values and test them all.
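As a rough illustration, that `end.<< 6` idea could be implemented like this (an assumed sketch, not testy's actual machinery):

```ruby
# Hypothetical sketch: test returns an object whose << compares the
# block's return value against the expected result given inline.
class Example
  def initialize(name, &block)
    @name   = name
    @actual = block.call
  end

  def <<(expected)
    { "name" => @name, "expected" => expected,
      "actual" => @actual, "ok" => @actual == expected }
  end
end

def test(name, &block)
  Example.new(name, &block)
end

report = test 'an example of summing an array using inject' do
  a = 1, 2
  a.push 3
  a.inject(0) { |n, i| n + i }   # the block's value is the sum
end << 6
```

This keeps the example readable, but it does show the limitation mentioned above: only the final value of the block can be checked.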
so the goal is making it even easier to have a user play with your
tests/examples to see how they work, and even to allow simple examples
to be integrated with your test suite so you can make sure your
samples
still run without error too. of course you can do this with test/unit
or rspec but the output isn't friendly in the least - not from the
perspective of a user trying to learn a library, nor is it useful to
computers because it cannot be parsed - basically it's just vomiting
stats and backtraces to the console that are hard for people to read
and hard for computers to read. surely i am not the only one that
sometimes resorts to factoring out a failing test in a separate
program because test/unit and rspec output is too messy to play nice
with instrumenting code?
I agree. Part of the problem is that when one thing is wrong making 20
tests fail, all with their respective backtraces, it can be very hard to
see the wood for the trees. What would be nice would be a folding-type
display with perhaps one line for each failed assertion, and a [+] you
can click on to get the detail for that particular one.
yeah that's on deck for sure. i *do* really like contexts with
shoulda. but still
cfp:/opt/local/lib/ruby/gems/1.8/gems/thoughtbot-shoulda-2.9.1 > find
lib/ -type f|xargs -n1 cat|wc -l
3910
if we accept the research and assume that bugs scale linearly with
the # of lines of code this is not good for robustness.
I disagree there - not with the research, but the implied conclusion
that you should never use a large codebase. Shoulda works well, and I've
not once found a bizarre behaviour in the testing framework itself that
I've had to debug, so I trust it.
(This is not true of other frameworks though. e.g. I spent a while
tracking this one down:
https://rspec.lighthouseapp.com/pro...677-rspec-doesnt-play-well-with-delegateclass)
this is one
of my main gripes with current ruby testing - my current rails app has
about 1000 lines of code and 25,000 lines of testing framework!
Yeah, but how many lines of Rails framework?
Cheers,
Brian.