Jun. 8th, 2013 11:19 am
You've heard this rant before
On one of my most frequently visited fora, someone is trying yet again to come up with an "objective" definition of "a language". I've seen this discussion so many times at this point I've got my canned arguments all ready to go: "languages" are just high-order abstractions; judgments of "similarity" are hopelessly subjective; lexical comparisons depend on a series of increasingly arbitrary definitions (e.g. which terms you choose to compare, how you measure vocabulary size, what definition of "word" you're using). But it never seems to matter: people seem to feel very strongly that "languages" are concrete things rather than convenient social fictions and, therefore, can be defined scientifically.
Of course, the real problem is that most people don't understand what that word means. Someone linked to this laundry list of intelligibility studies and asked if we thought it was "accurate". How could anybody tell? There's no methodology given at all, so we haven't the faintest idea what the researchers actually measured. It's like asking, "Is it true that when you're embarrassed you're also 27% angry?" Yeah, sure: choose the appropriate criteria for defining "embarrassment" and "anger" and the appropriate method for quantifying the data and you can get that percentage. Does that make it "scientific"? No.
But present your results in numerical form and most people will simply accept them. The first response to that blogpost begins, "I don’t dispute the scientific validity of those findings, but..." Well, then, you're a fool. Worse, you're a cargo cultist who believes the mere presence of numerical data is evidence of "science". For all you know, someone just banged out those percentages on a pub table over a beer or two. Or maybe they abstracted them from a single reading comprehension test. Who cares? One way or another, they don't tell you what you think they're telling you.
no subject
At the end of the week I handed in to the somewhat bemused professor a sheet of stats, rather than the essay he was expecting. I got it back with the single comment, "This looks convincing, I think."
The thing was - never mind the fact that the test I'd used wasn't really adequate or that the professor didn't understand the maths - I'd made basic errors in the Old English: half the forms I'd counted were actually datives, something he was well qualified to spot if he hadn't been blinded by the spurious figures.