Monday, October 19, 2009

Questions for an Interview

QUESTION: Based on your resume, you have [seen lots of different management styles OR had extended experience with a single employer]. With that experience, I'm sure you've seen your share of good and bad management practices. What are some of the worst things you have seen management do? What style of management works best for you?

Of course, this is a trap! This is a good indication of whether or not the person will fit into the management style you use. Typically, the applicant will answer "micro-management" (as the worst style), and that they prefer someone a little more hands-off. ...I find this to be a cop-out. A good answer here would use a specific example of something that management did poorly and how the applicant attempted to resolve it. The best answers to "best for you" would involve managers that give the applicant the tools they need to succeed. Such answers demonstrate that the person knows how a team actually works, that they are driven, and that they can overcome obstacles.



QUESTION: What do you think are the most important qualities of [good code]?

...Note that this could be applied to pretty much any trade; just be sure you're asking about the resulting product, not the position--that's a different question.

Good answers demonstrate that they have thought about this (hopefully, a LOT) and have formulated their own opinions about what makes the product good. Of course, you can add your own interpretation to their answers to see whether they're a good fit with your company, too: do they share values with the other workers?

QUESTION: What do you think are the most important traits for a [developer]?

...Note that, of course, you can change "developer" for anything you want.

You want them--WITHOUT PROMPTING--to list off their own best traits and to explain how they possess them. If you have to prompt them with a question like "...and how do you think you measure up to these?", then you've found an applicant who doesn't really apply themselves and is probably low on confidence. It comes out naturally with good applicants. And, again, you want to compare their answer with the values of your current employees to see if it's a good match.

QUESTION: What makes you a remarkable candidate? What do you think makes you stand out from the other applicants?

Well, you may or may not want to see some humility here ("I don't know if this makes me *remarkable*, but..."); that's up to you. But this answer yields a good idea about their level of confidence and what they like to do most. ...Not necessarily what they're best at, since their answer is so biased. : )

QUESTION: Out of all the people you've worked with, who would you say you respected most, and why?

[after they answer]

...If I were to ask that person to describe you, what would they say?

Well, I'm not sure, but the point of this question is to get the applicant to think about two things and how they relate: what they like most in a co-worker, and what co-workers probably think of them.

QUESTION: Assuming you get this job, what would be your biggest fear? What do you feel could go horribly wrong?

If nothing else, I think this question gives you a chance to see how the applicant handles a stressful, difficult question. Ideally, they would answer honestly, and tell you what the biggest potential incompatibility would be. However, I find that most applicants shrug this question off and use it to make a joke. I've also seen applicants really choke on it, which (to me) was a good indication that they weren't likely to work well under stress, and that they aren't comfortable being honest with authority figures.

Thursday, October 15, 2009

Kids' contest: art and nature

http://www.gettoknow.ca/us/contest/

The contest, recently launched in California for the first time and currently accepting entries, is an invitation for youth (age 18 and younger) to go outside (or to a zoo, aquarium, or museum), to "get to know" their wild neighbors, and then to share their experience with the world by creating an original work of art, photography, or writing and entering it in the contest.


...Perhaps something EOL could get involved with?  Sounds like it's CA-only at the moment, but I could be mistaken.

Thursday, July 16, 2009

Legacy Doesn't Always Mean "Old"

I've decided the EOL website development project is a "Legacy Code" project.

That may seem a bit naive, since it's really a start-up project that's two years old, give or take.  However, there are three reasons for my decision.
  1. Because it was a rapid start-up project that was done without "real" Agile development.  It accrued a surprising amount of code debt in its early days.
  2. Because the database is non-Rails in some serious ways, particularly in its use of compound keys and the fact that it's spread over three databases instead of one.  There are good business reasons for these decisions, but they affect our use of Rails (see the sketch after this list).
  3. Because of the lack of robust testing.
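To give a flavor of the second point, here's a rough sketch of what that looks like from the Rails side.  The model, connection, and column names are made up, and the compound-key call assumes something like the composite_primary_keys gem:

class HarvestedName < ActiveRecord::Base
  # Hypothetical model: lives in a second database rather than the main one.
  establish_connection "eol_data_#{RAILS_ENV}"

  # Hypothetical compound key (via the composite_primary_keys gem) instead of
  # a conventional Rails-style id column.
  set_primary_keys :hierarchy_entry_id, :name_id
end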
It's the latter case that I am most interested in addressing first and with highest priority.  Our team agrees, but also points out that there is a lot of work needed in the area.  We have lots of tests!  But they are difficult to use, fragile in some cases, slow, and so on.  Given these problems, we had a 90-minute meeting today, where we asked four questions:
  • What are our overall testing goals, really?
  • How well are we achieving them, currently?
  • What are the most pressing needs?
  • How can we best apply our time to improve tests?
I found these to be very effective!  To answer the first one, we developed the following three answers:

  1. Facilitate careful development (TDD).
  2. Prevent regressions from being deployed.
  3. Create enough trust to allow refactoring.
To the next three questions, we had a lively and effective discussion, with lots of excellent ideas and approaches discussed.  The end result was a list of concerns, another list of "action items" to address the most important among them, and, interestingly, a list of "philosophies" about how to approach tests.  They are somewhat specific to EOL, but they were cool enough that I feel compelled to share them:

  • View and controller tests are really much like "unit" tests, since they are testing a method on a class... it's just a different "kind" of class.
  • It's okay to throw away unit tests (including controller/view), as long as the features they are supporting are all tested with higher-level tests, and those tests are passing.  Implementation methods change, and some of the requirements of the past are likely to change, too.
  • Try to write tests that are not fragile (meaning: refactoring doesn't fail tests).  The underlying idea is that when a feature is complete, all tests should pass, and when the feature is incomplete, some tests should fail.  Period.  The internals are less important, though someone may have a need to test them (at the unit level) to help with development.
  • The blackbox, factory approach is more appropriate than mocks and specs in EOL's environment, but we need to make the domain logic approachable to developers (new ones in particular).  Copy/pasting solutions from other tests /works/, but is not ideal.  We need to make this easier and more convenient to use.  Developers should /want/ to write tests, because it's easy to do.  Newcomers have varying levels of domain knowledge.  (A rough sketch of this style follows below.)
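To make that last point concrete, here's a minimal sketch of the blackbox style, assuming a hypothetical factory helper (build_taxon_concept) that creates real records; the names and the fallback-to-scientific-name example are just illustrative:

describe 'TaxonConcept (blackbox style)' do
  before(:all) do
    # Real rows via a (hypothetical) factory helper -- no stubbing of the
    # methods in between.
    @taxon_concept = build_taxon_concept(:scientific_name => 'Felis catus',
                                         :common_name     => nil)
  end

  it 'falls back to the scientific name when there is no common name' do
    @taxon_concept.name.should == 'Felis catus'
  end
end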

Friday, July 10, 2009

On Creating Effective Web Interfaces

At EOL, we've been asked to implement some new features that we (the developers) didn't like. They didn't seem to adhere to web-standards, didn't look particularly good, and didn't really seem to capture the problem (caveat: as we see it) elegantly.

This sparked a discussion (healthy, not heated) about how we could best approach design and web usability as we add features. Another developer wrote a blog-post about it.  (...internal-only; sorry, no link for you!)  I felt compelled to talk about the same subject myself and field some ideas.  His post was all well-formatted, edited, and technical.  Mine is casual and sloppy.  So it goes.  ; )

So what's the problem?

We're using Agile methods, so the proximal problem--which we are already trying to address--is that the temptation is strong to just "get something out there".  Users can respond to it, and we can quickly adjust our approach to build something they will like better.

This has a few negative consequences:
  1. The code gets ugly and risks leaving remnants of old styles when it changes.  Have a look at our CSS, for example.  :(
  2. I think users can get scared away.  At least for me, I know I've tried to use some feature of, say, Yahoo! Mail, hated it, and never looked at it again.  ...They may very well have cleaned things up later, but I don't care to look.
  3. User feedback tends to... well... suck.  In my experience, they'll ask for things that actually result in clutter.  Also, it's only the squeaky wheels that get heard, and their needs may be vastly different from the majority.
  4. There is a big but poorly-understood difference between a feature that's easy to use and one that is "intuitive".  The latter implies that a newcomer can step in and use the tool... well... intuitively.  This is great!  But it often creates an interface that wears on you quickly, or ends up as something you hate over time.  Photoshop's clone tool is a good example of a tool that's easy to use, but not at all intuitive.  It's important to know which is better for your problem. I think that user-generated feedback generally comes from users who have tinkered with an interface for something on the order of minutes.  They're talking about intuitive use, not easy use.
...There are probably others, but I'm getting bored.  ; )

The core problem, I think, is that we really need someone skilled at web interfaces.  But we don't have the money to add someone to the team to do this, and everyone currently on the team is too busy to pick up the skill and/or apply it at the expense of their current duties.

In short, we are left with only one option: establish some guidelines by which we can do the best we can, on our own.

To that end, I like the idea that my associate mentioned (interestingly enough, by quoting yet another associate of mine): a list of usability questions that can be asked to gauge whether an interface works or not.

...I think this is a reasonable start.  That said:
  • I don't know whom we should ask these questions.
  • I worry that we'll end up with design-by-committee interfaces, which usually suck.  Ideally, I hope that design can be limited to two people on any given feature.  ...Ideally-ideally, the same two people on every feature, to maintain consistency.
  • Nothing in the questions mentions web standards/best-practices.
  • I worry that this is going to take up more time than we have, and breed frustration at the slow implementation of new features.
  • I think the questions are good for measuring usability, but there are other concerns, such as intuitive-vs-easy, elegance, and the "message" conveyed by each feature.  I hope we can add questions into the mix so that these things are considered.
  • I think it's worth investing two or three people's time over a day or two (each) to research usability, and contribute to the "checklist" that we're building.  This could include best-practices, design methodology, questions to ask, and the like.
I could go on.  This is a fascinating subject with many different possible solutions.  ...Sadly, I don't think any of them are even remotely as good as "hire an expert".  : (

Tuesday, June 2, 2009

"rake stats" should count all my stuff!

We have some custom directories for Specs, which aren't counted when you run "rake stats".  I thought we deserved more credit!  : )

I put this at the bottom of our Rakefile.  It's a little... brute-force, as it trumps RSpec's own task of the same name, but it does the trick:

# We have some pretty customized stat directories, so:
require 'spec/rake/spectask'

namespace :spec do
  # Setup specs for stats
  task :statsetup do
    require 'code_statistics'
    ::STATS_DIRECTORIES << %w(Blackbox\ specs spec/blackbox) if File.exist?('spec/blackbox')
    ::STATS_DIRECTORIES << %w(Model\ specs spec/models) if File.exist?('spec/models')
    ::STATS_DIRECTORIES << %w(View\ specs spec/views) if File.exist?('spec/views')
    ::STATS_DIRECTORIES << %w(Controller\ specs spec/controllers) if File.exist?('spec/controllers')
    ::STATS_DIRECTORIES << %w(Selenium\ specs spec/selenium) if File.exist?('spec/selenium')
    ::STATS_DIRECTORIES << %w(Helper\ specs spec/helpers) if File.exist?('spec/helpers')
    ::STATS_DIRECTORIES << %w(Library\ specs spec/lib) if File.exist?('spec/lib')
    ::STATS_DIRECTORIES << %w(Cucumber\ features features) if File.exist?('features')
    ::CodeStatistics::TEST_TYPES << "Model specs" if File.exist?('spec/models')
    ::CodeStatistics::TEST_TYPES << "View specs" if File.exist?('spec/views')
    ::CodeStatistics::TEST_TYPES << "Blackbox specs" if File.exist?('spec/blackbox')
    ::CodeStatistics::TEST_TYPES << "Selenium specs" if File.exist?('spec/selenium')
    ::CodeStatistics::TEST_TYPES << "Controller specs" if File.exist?('spec/controllers')
    ::CodeStatistics::TEST_TYPES << "Helper specs" if File.exist?('spec/helpers')
    ::CodeStatistics::TEST_TYPES << "Library specs" if File.exist?('spec/lib')
    ::CodeStatistics::TEST_TYPES << "Cucumber features" if File.exist?('features')
    ::STATS_DIRECTORIES.delete_if { |a| a[0] =~ /test/ }
  end
end

Friday, May 15, 2009

RCov Gem Installation problem

I couldn't get RCov installed this week:
$ sudo gem install rcov --trace
ERROR:  While executing gem ... (OptionParser::InvalidOption)
    invalid option: --trace
Poking around a wee bit, I discovered that this was just rubyforge being obnoxious.  Go download the tarball form of the file and then just "sudo gem install [name of the tarball]".  All good.

Friday, March 20, 2009

Testing Your WebApp

I've been looking into the various ways to test web applications, including performance, acceptance tests, and cross-browser compatibility.  In my travels over the past week-and-a-half, I have decided there are three levels to this:
  1. Testing the design of your site.  Does it look similar enough in all supported browsers?
  2. Testing the function of your site.  Does all the Javascript work?  Is everything that needs to render rendering?
  3. Testing the reliability of your site.  How fast is it?  How many users can it handle?  For how long?
Design Testing

A good first step is to validate with the W3C tools and try to stay 100% standards compliant. But you're doing that already, I'm sure. : )

Adobe has a neat "Meer Meer" application that's coming... sometime. There is also SuperPreview.  Both of these allow cross-browser design using nothing more than whatever web browser you have installed: they call the real browsers on the server side, take snapshots, and serve you the images.  The coolest feature here is that you can overlay two browsers' snapshots and have the differences jump out at you.  ...Both of these are unreleased, but just around the corner.  Very cool.

If snapshots are enough, you can get these cheap: $50 a month for Litmus.  You can hook these tests into your testing suite. CrossBrowserTesting has a demo video here. Nothing earth-shattering: just note that you have to do these "tests" by hand.

IBM says: "For instance, for a monthly fee, Browsercam.com gives you access to a broader range of platforms and browsers—including some mobile platforms—and these pages are dynamic, allowing you to test Ajax apps and other JavaScript-based pages. Some services, like Litmusapp, can integrate with your testing and development workflows, offering versioning support and ways to publish test results to external clients."
 
Interestingly, if you want to do these things by hand, there are free Microsoft images of common Windows versions/Browser combos for use by developers.  The bummer is that you have to renew them every so often (and only supported OS/browsers are represented), but it's a great way to test your app without spending too much money on setup.

A decent rundown of the "images of your site" tools is here (there were other overviews, but not as good); we should see what the JavaScript-capable options can really do. If you need older versions, you'll have to install those yourself, but old IE versions are still available on the web (not from Microsoft, however).

Functional (Acceptance) Testing

For acceptance testing your Javascript, there are a few choices in the world of headless (i.e., much faster) browsers.  HtmlUnit seems to be a usable Java "mock" browser. There's a JRuby wrapper for it called Celerity, which could get me started faster. Similar is jsTest, also in Java. Jiffy struck me as overly complex, and hard-wired to Oracle for its measurements. Joe Dog Siege looks like a usable, mostly command-line-driven load tester.
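For a taste of the Celerity style, here's a rough sketch (it needs JRuby, and the URL, field, and button names are placeholders):

require 'rubygems'
require 'celerity'

# Drive a page headlessly, JavaScript included (placeholder site and fields).
browser = Celerity::Browser.new
browser.goto('http://www.example.com/search')
browser.text_field(:name, 'q').value = 'tree of life'
browser.button(:name, 'commit').click
puts browser.text  # the rendered page text, after any JavaScript has run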

An interesting but probably less-useful alternative is to just check the JS for problems using crosscheck, but as of now it doesn't seem to have usable documentation, so...

If you don't mind using real browsers (or have a need to), Selenium is the standard right now for writing acceptance tests, and it's really easy to set up.  Whether it adds overhead is an interesting question; I would assume yes, based on some comments from other tools' sites. That said, A) you don't want to mix your acceptance tests with your performance tests, and B) it seems to be a standard right now, and I do recommend using it because of that.  There are some alternatives, though.  For example, JsUnit is a way to do the tests in native JavaScript, which is quite possibly useful.

Performance Testing

"Performance testing" (actually lots of sub-categories; read on) is a rather large arena, and I found this stuff quite interesting.  ; )  I found the best introduction to the topic to be this neat video from Google, which gives some definitions:
  • performance testing - how fast will it respond under load X? You need to test things at least long enough to reach a steady state. Start up slowly: don't just throw 100 users at it; build up over a warm-up period. The machines under load periodically push performance data (see below) to monitoring servers, which analyze and store the data.  These monitoring servers also monitor themselves (so you know what the effect of storing the data is). To the extent possible, remove the network: run everything in its own little environment.
  • stress testing - at what point does the system break under load (and does it do so gracefully)?
  • load (or reliability) testing - given a high load (80% of max) for an extended period of time (Google does 1 month; 72 hours is a minimum recommendation), how does it perform?  Often there are cron-related or system-level tasks (e.g., log rotation) that affect performance only at certain times, so this needs to be examined.
  • benchmark - a simplified measurement that is repeatable, representing a typical customer's activity on the system.
  • profiling - When you change some aspect of the system (memory, load balancer, etc) and measure the differences in benchmark performance.
  • availability testing - when a system fails, how long does it take to come back up? Google operates under the assumption that their equipment sucks, so they will solve that problem in software.
Her reasons for testing are 1) to make honest claims to the client, backed by data; 2) to test hardware changes; and 3) to let developers see the effect of their code over time.  NOT for functional testing.

Some other tips included: run every test at least 5 times before you measure it: inconsistency must be accounted for.  She recommends a single five-minute test run "on demand", when a developer requests it, because you get *some* information from it, but nightlies are still required.
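Just to illustrate the "measure it several times" advice, here's a toy Ruby sketch (not JMeter; the URL and counts are arbitrary):

require 'net/http'
require 'uri'
require 'benchmark'

url      = URI.parse('http://localhost:3000/')
runs     = 5    # measure more than once; a single number hides the variance
requests = 100  # requests per run

times = (1..runs).map do
  Benchmark.realtime { requests.times { Net::HTTP.get_response(url) } }
end

times.each_with_index { |t, i| printf("run %d: %.2fs\n", i + 1, t) }
printf("spread: %.2fs\n", times.max - times.min)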

She recommends JMeter for measuring loads (though it doesn't handle JavaScript; there is a demo of how to get started, too), or Grinder (for "APIs").

Note that you should track not only QPS, response times, etc., but also CPU load, memory, the versions of every piece of software on the machine, and disk I/O.

Interestingly, she says that Linux is typically performant as long as you don't exceed any single resource's 80% mark.

You can buy load testing.  Prices on LoadStorm, for example, range from $30 to $200 a month. LoadRunner, from HP, is another alternative (software that you run yourself), which appears to be rather expensive... (Well, when they don't list a price, one must assume...)

BrowserStorm looks reasonable for a paid service, and has an explanation of its features in a blatant plug on Ajaxian.

Hope this helps.

Monday, March 2, 2009

AIM Chatroom Death

...So we EOL developers hang out in an AOL chatroom.  Last week, I got logged out of it, and couldn't get back in.  ...Not sure why.  I hoped the problem would just go away, but it never resolved itself.  The error I was getting was:

Couldn't join chatroom "".

An AIM service error occurred.

Error: Serv:RequestTimeout
ChatError:NoError

Useless message!  But eventually I found a page describing the problem, with the suggestion to change the AIM server port from 5190 to 443.

That worked.

Friday, February 27, 2009

Fixing "svn: Checksum mismatch"

While trying to check in some changes I made, I ran into a problem that gave me the following error:

Transmitting file data .svn: Commit failed (details follow):
svn: Checksum mismatch for '/Users/jrice/workspace/eol-dev/spec/models/.svn/text-base/taxon_concept_spec.rb.svn-base'; expected: 'e4bd7ddf1f3c78447253dbdcd4ab96f6', actual: 'edbe4f2e08778a7d4f33212a812450ec'


...At first, I really freaked out, wondering if I had some kind of virus tacking on additional information to my files or something like that. But, of course, that was just panic. In truth, this is just a bad MD5 checksum in the svn database, and only for one file.  It's kind of a pain to fix, but it's simple enough.

I found a very nice walkthrough to fix it on Chris's Glob blog. For posterity, I will re-cap it here.  Start by committing everything you can (basically, everything but the one file that is having trouble).  Here's the rest of what I did, based on Chris's walkthrough:

[models]
$ cd /tmp
[tmp]
$ svn co https://repository.eol.org/eol/trunk/spec/models
A    models/toc_item_spec.rb
A    models/taxon_concept_spec.rb
A    models/transactions_spec.rb
A    models/normalized_names_and_links_spec.rb
A    models/data_object_tag_spec.rb
A    models/hierarchy_spec.rb
A    models/user_spec.rb
A    models/factories_spec.rb
A    models/hierarchy_entry_spec.rb
A    models/name_spec.rb
Checked out revision 295.
[tmp]
$ cd -
/Users/jrice/workspace/eol-dev/spec/models
[models]
$ cd ..
[spec]
$ mv models /tmp/models.orig
[spec]
$ mv /tmp/models .
[spec]
$ svn up
At revision 295.
[spec]
$ cp /tmp/models.orig/taxon_concept_spec.rb models
[spec]
$ svn st
M      models/taxon_concept_spec.rb
[spec]
$ svn commit -m "Adding specs to taxon_concept in prep. for xml API"
Sending        spec/models/taxon_concept_spec.rb
Transmitting file data .
Committed revision 296.


That's it: as you can see, the file went up just fine after that.

Friday, January 23, 2009

Testing [warning: LONG post]

So, today, Remi (of OpenRain) and I made a presentation to the other developers about what we thought would be the best M.O. we could pick for testing our application. Without going into the "whys" behind this testing sea change, I thought I would recap what I've learned, with Remi's help.

First, we've re-defined what to test. When our testing suite was at its low point, the answer to that question was "everything".  ...And that's what made the tests so brittle: if anyone made any change at all, the tests let them know about it. So, coming up with a clear definition of what gets tested was important to me.  The answer I chose was twofold:
  1. Those parts of the code (namely, APIs) that really should never change, and
  2. The functionality that we really need to prove works.
The latter point is made particularly clear by the argument for black-box testing. The most poignant point he makes in that video is the assertion that "this really doesn't prove anything", and that was very important to me.

So, with those two core definitions, we decided to take two core approaches:
  1. Unit testing
  2. Full-stack (or nearly-so) functional testing
...That is to say, no view tests: they depend too much on everything else to render properly and are too difficult to test in ways that aren't going to break when you decide to, say, rename a div. Secondly: no controller tests. As Remi put it, controller tests are huge tests, and they prove almost nothing. They are not worth their weight.

I looked into a myriad of tools for writing tests, including shoulda, context, matchy, zebra, and a review of RSpec.  In a nutshell:

Shoulda is really cool.  The assertion that one line of Rails code should require one line of testing code is... superb. I'd like to work under that assumption and write our own custom assertions to facilitate this. But shoulda also relies heavily on an application being very much in line with the Rails Way, and our app, because of its highly complex, stand-alone database, is not particularly so. I feel much of the power of shoulda would be lost on us.  So, rather than learn this system, we should just pass it by.  Fortunately, RSpec has made some changes to their system that make shoulda-macro-like custom assertions and "one-liners" much easier.
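As a taste of what I mean by custom one-liners, here's a home-grown sketch (not shoulda itself; the module, macro, and model names are hypothetical):

module ValidationMacros
  # One line of model validation should need only one line of spec code.
  def it_requires_a(attribute, &build_valid_model)
    it "is invalid without a #{attribute}" do
      model = build_valid_model.call
      model.send("#{attribute}=", nil)
      model.should_not be_valid
    end
  end
end

describe User do
  extend ValidationMacros  # pull the macros into this example group

  it_requires_a(:email) { User.new(:email => 'someone@example.com') }
end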

Context and matchy are also neat, in that they give you much of what RSpec and shoulda give you, with much less baggage. I like that concept! But when I actually installed the code and tried it out, a lot of the things that "just work" in RSpec just... didn't.  Rather than wrestle with it for more than a day, I decided these tools are... well... too light for our purposes.

Zebra impressed me the most.  I think we'll all be writing tests in this style (example: expect { @my_model.to be_invalid() }) in the near future. ...But at the moment, one needs to define too many of those assertions oneself, and thus I don't think it is really mature enough for us to start using without a serious investment in time.  ...which we just don't have.

In the end, I decided it was wisest to stick with RSpec.  It's got everything one needs to write excellent unit tests... and then some.  So, yes, it comes with some overhead and baggage, but on the other hand: the things you need to "just work"... do.

That said, the style of testing we'll be doing needs to change. As I said earlier, we should focus on testing only those parts of the code that really shouldn't change.  For example, we expect a taxon_concept to have a common name, which comes from a particular part of the database and defaults to the scientific name when that entry isn't found.  These are things we can test, and we can do it without stubbing the tar out of every method that ever gets called to create the end result.

Consequently, we will be hitting the database to run tests.  And this makes me a little sad, because I know how slow testing can be when it's dependent on the DB.

To alleviate some of this, we are going to try and stop using fixtures in favor of factory_girl.  This has several advantages:
  1. Fewer model instances than with fixtures. If you need to test a "special" model, you instantiate one with the special feature; it's not there in every other test that doesn't need it. Hopefully, this will cut down on the time it takes to prepare for any given test.
  2. Easier to define (DRYer) than fixtures. There's (nominally) one factories.rb file sitting somewhere, with all of your models defined in a rather succinct syntax.  Compare this to the 50+ YML files sitting in a directory.  If you tweak the relationship between two models, you're not doing a search-and-replace on several YML files, you're changing one definition in one place.
  3. More robust than mocks. The problem with mocks is that you need to stub each function that gets called on them, and this can be quite expensive (in developer's time) and non-dry, if not done carefully.
  4. More coverage of class behaviours. So, when you call that name() method, which bounces all over creation to find your common name, you're flexing the muscles of all the pieces involved to make sure they work.  Of course, this "feature" comes at the price of less isolation of code. ...and isolated tests are one of the hallmarks of RSpec.  But I think it suits our project better to take the coverage. Plus, our project makes heavy use of find_by_sql, which might otherwise go untested.
  5. Easier to instantiate than mocks. Our "top-level" model, a taxon_concept, relies on around 20 other models to actually work. With RSpec mocking, I had to create each of those mocks and tie them together.  The resulting code was very, very ugly.  Yes, I probably could have cleaned it up, but I don't think I could have gotten it nearly as succinct as factory_girl's syntax.
Factory_girl is a rockin' module.  I think everyone should be familiar with it.  (And, yes, I am aware that there are a number of viable alternatives with the same underlying behaviour.  But f_g seems most popular and least cluttered.)
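For anyone who hasn't seen it, here's a rough sketch of the factory_girl style (circa-2009 syntax; the model and attributes are made up), as opposed to maintaining the same data across dozens of YML fixture files:

# In factories.rb -- one succinct, canonical definition per model:
Factory.define :user do |u|
  u.username 'testy'
  u.email    'testy@example.com'
end

# In a spec -- instantiate only what the test actually needs:
describe User do
  it 'keeps the email address it was given' do
    user = Factory(:user, :email => 'someone-else@example.com')
    user.email.should == 'someone-else@example.com'
  end
end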

...So we still have the problem of proving that the website works.  This was a problem with RSpec.  Because of its fantastic isolation of testing, one was never really sure if the whole stack was going to behave properly. I spent the vast majority of my time in the past 1 1/2 weeks trying to solve this problem of proof that the damn site actually does what you want.  There are plenty of solutions out there, but personally, I found most of them clunky. ...at best.

Enter webrat.  This is a package that makes visiting a site as simple as... well... visit(url).  And you can click around, fill in forms, and do all of those sorts of things with similarly simple syntax.  Example from the current homepage:

class SignupTest < ActionController::IntegrationTest

  def test_trial_account_sign_up
    visit home_path
    click_link "Sign up"
    fill_in "Email", :with => "good@example.com"
    select "Free account"
    click_button "Register"
  end

end
...Isn't that slick? ...This makes writing good functional tests a piece of cake. And, while we could easily run these kinds of tests with RSpec, I decided that, because we operate on a user-story-centered style of implementing features, we could also adopt cucumber, which is another really slick wrapper around user stories.  Basically, you write tests in plain English, using Given / When / Then blocks, and then write some Ruby code to match your plain-English assertions and turn them into webrat (or other) full-stack tests.
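To show how the two layers fit together, here's a rough sketch of some step definitions (the feature text is paraphrased in the comment, and the path helper and page copy are made up):

# Steps for a hypothetical plain-English feature along the lines of:
#
#   Given I am on the home page
#   When I follow "Sign up"
#   And I fill in "Email" with "good@example.com"
#   And I press "Register"
#   Then I should see "Thanks for signing up"
#
Given /^I am on the home page$/ do
  visit home_path
end

When /^I follow "([^"]*)"$/ do |link|
  click_link link
end

When /^I fill in "([^"]*)" with "([^"]*)"$/ do |field, value|
  fill_in field, :with => value
end

When /^I press "([^"]*)"$/ do |button|
  click_button button
end

Then /^I should see "([^"]*)"$/ do |text|
  response.should contain(text)
end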

I expect some resistance to cucumber. It feels a little... hokey to write tests in plain English, then parse them in Ruby... but in practice, I have found the technique to be very readable, very usable, and surprisingly minimal (in terms of the amount of code).  Assuming there is sufficient buy-in for this, I actually believe it will turn out to be a really cool, really reliable way to, as I said, prove that the site works.

Of course, all of this is academic as of this afternoon.  We'll see how things pan out in the next week or two.  Some of these ideas may not be well-conceived, or may turn out to be ill-applied to our particular codebase. I'll keep open-minded about it. But I'm also really excited to at least try and get all of this to fit together nicely.

I actually rather enjoy writing tests, and I think these changes will make tests more fun, more useful, and more productive.  We'll see.

"Errors running test:units!"

Sounds nasty, doesn't it?

Worse, there's little clue, even with --trace on, as to why it failed.

In my case, however, I poked around a bit and noticed that my "test/test_helper.rb" file was missing. (Long story as to why.)  Restoring it fixed the "Errors running test:units!" problem.

Hope that helps someone else out there ('cause I couldn't find a good explanation online)...

Thursday, January 22, 2009

config.gem 'rspec'

I finally got rspec to install using config.gem (in Rails) with these lines:

  config.gem 'rspec',       :lib => 'spec'
  config.gem 'rspec-rails', :lib => false # Note this hack keeps it from failing on this gem.


...That allowed me to install/unpack them.  Which is all well and good.  But when I ran my specs, it failed, complaining about an "undefined method `evaluate_value_proc' for class `Spec::Matchers::Change'". Looking around, the claim was that this error was caused by "incompatible versions of rspec and rspec-rails". But that didn't jibe with the versions I had installed.

So I commented out my two lines, above.  ...At least I had them in vendor/gems (and my "script/generate rspec" bootstrap command had run fine earlier).

With the gems commented out, everything worked.

[shrug]  Go figure.