I saw a tweet about — well, not writing tests, strictly, but taking tests as part of applications for jobs or contracts. But I choose to make this about writing tests.
I hate them, and remain salty about them.
The logic behind them seems to run thus:
- We want to hire the right people
- Evaluating writers is hard
- We want to know they can write our specific thing
- So as well as asking for a portfolio, we’ll get candidates to complete some bespoke work assignment so we can gauge their expertise and make sure they can write OUR thing
My problems with this are manifold.
- Most competent games writers can write most game forms/styles/genres. Some will be better at some and worse at others, granted, but I feel like you’re not actually solving a problem with these tests most of the time. (Or: if the thing you are trying to test for is 'baseline competence at their craft', this is the wrong way to go about it.)
- Most of the time, you’re not going to get better or more information than you get from a portfolio, unless your test is very well put together and carefully thought through.
- It generally amounts to the hiring company trying to mitigate risk (of hiring the ‘wrong’ writer) by gathering additional information. Which is not inherently bad! It’s actually sensible! The problem is that a) I’m sceptical about the quality of the information received and b) they do this at the expense of the candidate -- by having them bear increased risk and invest more time and labour instead.
This is not to say that these tests are never useful. I’ve seen some that are, and have, once or twice, enjoyed the experience. But the vast, vast majority I’ve engaged in or seen colleagues engage in have been colossal wastes of everyone’s time.
Here are some things I’ve had, or seen come out of writing tests:
- Company just ghosts candidate
- Company goes silent for long period before announcing that, due to strategy changes, they’re not hiring for this position any more
- Company tells candidate their submission isn’t something they could ship outright (which is an absurd success metric for a cold-read writing assignment meant to get more info about the candidate)
- Company returns feedback on the writing test, parts of which contradict the original brief, or treat as essential criteria that were absent from the original brief
- Company progresses candidate, before later taking issue with quoted rates being ‘too expensive for their budget’, despite rates being a known/knowable issue before moving ahead with a writing test
Many of these aren’t about the tests per se — they’re general problems with hiring processes. But their bullshit factor is greatly compounded by the fact that the company has had writers invest hours or weeks of their time speculatively, leading to an absurd amount of wasted effort. There's already a great power asymmetry in these processes; this makes it so much worse.
The instance I am still most personally salty about is a company which told me they were finding too many candidates weren’t ‘at the standard they wanted’ when it came to doing writing tests after completing several interview rounds. This was losing them too much time doing interviews, so they moved their writing tests earlier in the process, before interviews. This means they were giving it to dozens of people, almost entirely speculatively. And this was a writing test which demanded, at minimum, several days of work. This is practically a moral hazard issue: how much of people’s time were they wasting to ‘solve’ a problem of their own by pushing the cost and risk onto other people?
‘Solve’ in scare quotes because I can’t imagine that this helped them much. Surely they were burning far more time on this than just doing interviews. Going over writing tests is time-consuming!!
And, some additional salt: given their offensively overscoped test and their approach to it, I’m not remotely surprised that they were having that original 'issue' with people not delivering the results they wanted after interview. I suspect that had a lot more to do with the test and what the company was looking for than any strict deficiency in the candidates. The whole process left me so soured, and reflected so poorly on the company and its processes, that I'd quietly warn away anyone considering working with them.
(This also gave me the thought that bad hiring processes will tend to persist because they tend to select, naturally, for those candidates who can endure at least that level of bullshit without giving up. Assuming a company's hiring processes are reflective of the company more broadly, if you can get hired by them, you've passed through the flaming hoops of process required to get through the door in the first place. Which means your tolerance is at least that high. An idle thought and a sweeping generalisation/rule of thumb, at best, but I think it has nonzero explanatory power for the inertia of bad processes.)
Salt aside, how would I fix this? Well, I’m mostly just going to refuse writing tests in future as flat policy, with the specific caveats below. They waste my time, and where I have the luxury of choice, I don’t want to work for anywhere that starts out the relationship by wasting my time. But, that aside, here are some things I'd be looking for:
- I am paid for my time. This doesn’t strictly make the test better, and it would basically be unheard of to get actual full working rate for a writing test. But it’s a risk mitigation measure — it means my time is less wasted and the company is willing to actually put their money where their perceived need is, which reduces asymmetry.
- The test seeks to solve some specific problem which is not just ‘can you write?’ (I can) or ‘can you write our specific thing?’ (which this is not a meaningful way to evaluate). Too many companies seem to give writing tests just because they can, or because they feel they’re supposed to. And they’re often the worst ones, because they’re unfocused or overscoped. The test, of course, then has to actually be well designed to support this objective, but this is ‘Step One’.
- The test should be scoped to be comparable to the amount of time I’d spend on an interview. If I can genuinely do it in a handful of hours, I’ll be less grumpy about it. But that’s hard to get right: if it’s technically achievable in a few hours, but you’d produce something far better by spending, say, four days on it, then you have to start second-guessing whether other people are actually going to stick to the brief. Put another way: if people feel they can maximise their chances of getting the gig by overworking (even if that contravenes the strict brief), they are going to overwork (which hurts them and, potentially, those who don't overwork by comparison). The hiring company cannot abdicate responsibility for creating this dynamic.
- The test should not produce any material the company could use, and explicitly not sign over any rights for them to use anything so produced. Maybe if they’re paying full rate for the test, they can make an argument for this, but then I come back to: are you really doing a writing test or just outright hiring someone for a short engagement? Because those are different things with different parameters, and trying to do everything all at once is to the detriment of all of them.
- The test should be given to as few candidates as possible. Waste the minimum of aggregate time.
If several of these things were true, I’d consider doing a test, even now. But generally, I think they are a bad investment for all involved unless the company practises great care and attention.
You can, in fact, gauge someone’s writing ability from samples and past work. If you lack that expertise, hey, consult with someone who does have it to help you evaluate candidates. A writing test might give you a little more insight into what someone is like to work with, but even for that it's a poor instrument. Figure out, above all, what problem you are actually trying to solve and what information you are trying to gather, and find some better way, because it’s almost certainly not best solved through a writing test.
Something I didn't get into (because I wrote this in bed at 6am while not sleeping) but that is also salient: writing tests also affect which candidates you're getting in the first place. This is both in terms of 'busy, well-qualified people don't have three days to spend on your test' and 'many people have personal circumstances that preclude this', e.g. kids, financial strain, chronic health conditions, other commitments, and so on – and that second group disproportionately includes marginalised people. Beyond all the other moral considerations, you'd better be certain your writing test is adding value to your hiring process if you're willing to limit your applicant pool in this way.