It's never clear to me how proposals to Federal agencies are actually reviewed. I don't mean the mechanics of the process: the initial compliance screen, distribution to reviewers, email or panel discussion culminating in an overall rating/ranking, evaluation and recommendation by a program manager, and so forth. Rather, hitting the moving target of the panel review is something of a mystery to me. I'd like to believe (or would like to think I believe) that a critical and balanced assessment of the scientific merit of each write-up is made. The comments I get back, though, suggest otherwise.
There's the "didn't she read paragraph 3 on page 5?" moment; the "reviewer 1: scope too limited; reviewer 2: scope too broad" moment (and all its derivatives, e.g., the "reviewer 1: computational aspects are weak, experiments great; reviewer 2: computational aspects great, experiments weak" moment); and the "he's clearly got a chip on his shoulder about that approach/system/method" moment. Then there's the "based on the funding record, the panel and/or program manager clearly want to fund A, while the synopsis says B" moment. The "they funded *that*?" moment. And on and on...
A personal favorite: the crushing realization that last year's panel was completely different from this year's, and hence that everything they asked for in last year's comments is exactly what they're knocking you for this year.
I just can't decide, sometimes, whether I should spend my effort trying to get better at hitting the targets... or just fire more arrows, betting that luck is better than skill.
That said, I think I've finally got a formatting/document-preparation workflow that I like. Or, well, at least a working process that lets me use LaTeX! Consider this a teaser for when I get around to writing it up.