A response to a paper by Samuel A. Hardy
In a recent paper in Cogent
Social Sciences,[1]
Samuel A. Hardy (2017) has attempted a wide-ranging comparison of the efficacy
of different kinds of regulations of metal detecting. In it, he attempts to
estimate the number of metal detectorists active, whether lawfully or
illegally, in several different European countries, Australia, Canada, New
Zealand, and the USA.
He also attempts to estimate the ‘damage’ caused by their
removal of artefacts ex situ. This he does by first estimating the average
number of hours per year searched by the average metal detectorist, and then
estimating the number of significant artefacts found per hour of searching. By
multiplying these estimates, he arrives at the estimated number of significant
artefacts removed ex situ per year in each of the examined countries, which he
takes to be the ‘damage’ that is caused.
These estimates he then compares transnationally, and
arrives at the conclusion that comparatively permissive or liberal regulatory
regimes are ineffective in minimising harm to the archaeological heritage.
Methodological and arithmetical flaws in Hardy’s (2017) paper
While I appreciate Hardy’s attempt, there sadly are serious
flaws in both his methodology and his arithmetic, and thus also in his conclusions.
Since he specifically quotes a recent paper on the matter that I co-authored
(Karl & Möller 2016) as the inspiration for his method of estimating the
number of metal detectorists active in different jurisdictions, I feel the need
to respond directly to his paper.
Remarks on our methodology
The methodology Möller and I used (also see “An
empirical examination of metal detecting”) in the paper cited by Hardy as
an inspiration for his is based on a quite simple principle: the direct
comparison of like data with like.
In our paper, we specifically explained why such a direct
comparison, rather than a comparison of estimates, is essential for debates
about regulation of metal detecting: estimates of the total number of metal
detectorists active in any particular country can and do vary widely,
frequently by an order of magnitude or even more (see e.g. for Austria the
range from as little as 250-500 to as many as 10,000+ in 2010, Karl 2011, 120
fig. 5; cf. the range of between as little as 9,000 to as many as 250,000 for
England and Wales, Hardy 2017, 15).
Comparing any estimates picked from these ranges with each
other transnationally is obviously meaningless: if one compares, per capita,
the lowest estimate for Austria of 250 active metal detectorists (1 metal
detectorist per 34,340 inhabitants) with the highest estimate of 250,000 for
England and Wales (1 per 232 inhabitants), then the latter obviously has 148
times as many active metal detectorists as the former. If, on the other hand,
one compares the highest estimate of 10,000+ for Austria (1 per 858 inhabitants)
with the lowest of 9,000 for England and Wales (1 per 6,432 inhabitants), the
former obviously has 7.5 times as many active metal detectorists as the
latter.
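This arithmetic can be verified directly. The following sketch back-calculates the population figures from the per-capita ratios quoted above, so the exact populations are assumptions rather than census data:

```python
# Approximate recomputation of the two opposite conclusions above. The
# population figures are back-calculated from the per-capita ratios quoted
# in the text (Austria c. 8.59 million, England and Wales c. 57.9 million)
# and are therefore assumptions, not census data.
POP_AUSTRIA = 250 * 34_340          # c. 8,585,000, implied by "1 per 34,340"
POP_ENGLAND_WALES = 9_000 * 6_432   # c. 57,888,000, implied by "1 per 6,432"

def inhabitants_per_detectorist(detectorists, population):
    return population / detectorists

# Lowest estimate for Austria vs highest for England and Wales:
low_at = inhabitants_per_detectorist(250, POP_AUSTRIA)             # 34,340
high_ew = inhabitants_per_detectorist(250_000, POP_ENGLAND_WALES)  # c. 232
print(round(low_at / high_ew))      # c. 148 times as many in England and Wales

# Highest estimate for Austria vs lowest for England and Wales:
high_at = inhabitants_per_detectorist(10_000, POP_AUSTRIA)         # c. 858
low_ew = inhabitants_per_detectorist(9_000, POP_ENGLAND_WALES)     # 6,432
print(round(low_ew / high_at, 1))   # c. 7.5 times as many in Austria
```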
Thus, depending on which ‘estimates’ from these ranges one
compares, one can arrive at totally opposite conclusions. Since arguments can
be found for any of these estimates to be ‘reasonable’, comparing any such
selected ‘estimates’ is extremely unreliable: one could compare ‘estimates’
which in one case represent as little as a few percent of the actual number of
active metal detectorists in one country and in the other represent a multiple
of the actually active community. Such comparisons, therefore, do not allow one
to arrive at any reliable conclusions.
For this reason, we proposed to compare data from the same
kind of source directly: metal detecting internet discussion boards. While
there is, of course, no guarantee that they actually do, there are several
reasons why it would seem likely that the membership figures of such boards
represent – at least roughly – the same fraction of the community
communicating about metal detecting in otherwise comparable countries. We have
argued that this is because they fulfil similar needs of the
respective communities in countries with similar societies, which makes it
exceptionally unlikely that, e.g., 100% of all active German metal detectorists
are members of the largest subject board in their country, while less than 10%
of their English and Welsh peers are subscribed to the largest equivalent board
in Britain.
If that assumption that the membership of ‘national’ metal
detecting discussion boards on the Internet represents – at least roughly – the
same percentage of the actual number of active metal detectorists in their respective
country is true, the board memberships can be directly compared, regardless of
what the actual number of metal detectorists in each country is. This is for
simple mathematical reasons: if a known figure A represents a particular
fraction F of another, unknown figure X, its ratio to any other known figure B
which represents the same particular fraction F of yet another unknown figure Y
will always be the same as the ratio between the unknown figures X and Y.[2]
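This principle can be illustrated with a minimal numeric sketch; the figures used here are, of course, entirely hypothetical:

```python
import math

# A minimal numeric sketch of the principle above: if two known figures A
# and B each represent the same (unknown) fraction F of the unknown totals
# X and Y, then A : B equals X : Y, whatever F happens to be. All figures
# here are entirely hypothetical.
F = 0.37                    # arbitrary; the argument holds for any F > 0
X, Y = 12_000, 4_000        # hypothetical true community sizes
A, B = X * F, Y * F         # the observed board memberships

assert math.isclose(A / B, X / Y)
print(round(A / B, 6))      # 3.0 - independent of the value of F
```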
In short: We directly
compared the same kind of data, collected with the same method, in all
countries we examined.
Hardy’s methodology compared to ours
Claiming that his paper’s methodology is “[f]ollowing a novel method of open-source
analysis of detecting communities in Austria, Germany and the United Kingdom
(Karl & Möller 2016)” he states that “searches were conducted to identify data on the size of detectorist
communities” (Hardy 2017, 8). Since Hardy intended to “analyse the impact of detecting”, he deemed it necessary in this
context to first “estimate the number of
detectorists in any territory” (ibid.).
Yet, the very point of our methodology is to avoid using
unreliable ‘estimates’ in transnational comparisons, and especially avoiding
unreliable estimates of overall metal detecting community sizes for any
analysis, but rather relying on like data. Hardy, however, disregards that
second element that is crucial for our methodology to work as well: he does not
limit his comparison to membership figures of the same type of social media –
as we did with restricting ourselves to discussion boards only – but also
liberally uses other kinds of social media data in some of his analyses, such as
the membership numbers of Facebook groups (e.g. “Metal Detecting Australia”,
Hardy 2017, 10, 46).
In fact, as will be shown in greater detail below (also see
table 1), he doesn’t even limit himself to social media group membership data
only, but uses entirely different data, particularly for estimating numbers of
active metal detectorists in England and Wales, and the USA (Hardy 2017, 15-17,
20-22). That any of the data he uses to establish his estimates for these
countries were comparable like for like with any of the social media data used
in his study is not even imaginable, let alone a ‘reasonable’ assumption.
As such, unless Hardy meant that using Google searches for
finding data on the internet on numbers of metal detectorists in different
countries is the “novel method of
open-source analysis” he is following, whose development he attributes to
Möller and me, he has not followed our method at all. Rather, he has utterly
misunderstood and perverted our method.
This is already highly problematic in itself, because we
(Karl & Möller 2016; see also “An
empirical examination of metal detecting”) designed and used our method for
a very specific purpose: to empirically test the core prognosis which can be
deduced from the theory of the preventative effect of restrictive regulation of
metal detecting; that countries with restrictive regulation have fewer metal
detectorists per capita than countries with more liberal or no regulation of
metal detecting. In other words: we designed our method for deductive
hypothesis-testing, that is, to evaluate whether a particular statement is
demonstrably false or is being confirmed by the particular data analysed.
It thus cannot produce meaningful results if (mis-) used in
the way that Hardy ‘applied’ it, that is, for abductive comparisons of ‘estimates’.
Using it for the latter does not produce an empirical examination of anything,
but just produces empirical data, as any Google search does in that it turns up
information existing on the internet (whether that information is true or
false). Hardy, however, does not seem to understand that.
In short: Hardy
transnationally compares different kinds of data, created using different
methods.
Hardy’s methods of ‘estimating’ metal detectorist numbers
While he claims (Hardy 2017, 8) to proceed like we did in
our study (Karl & Möller 2016), he in fact does not. Rather, what he does
throughout is to create ‘estimates’ for the number of active metal detectorists
in each of the countries he attempts to compare. He then compares those
‘estimates’, as well as basing all his further comparative calculations on them.
In other words, his study is based on the very fundamental flaw Möller and I
tried to avoid by developing our method: he transnationally compares
‘estimates’, but without actually ensuring that the numbers he compares are
actually transnationally comparable.
He does not even compare ‘estimates’ that have
been arrived at using the same kind of data and the same methodology. Rather, he
compares ‘estimates’ which seem ‘reasonable’ to him, based on entirely
different sets of data; data which, on top of everything, has been manipulated
differently by him for different countries. Table 1 below gives an overview of
the data used, and how it has been manipulated. It also shows the different
percentages by which he has either in- or deflated data from different
countries.
The ‘other’ countries
For the 10 countries other than England and Wales, and the
USA, that he examines, Hardy (2017, 10-20) proceeds in a reasonably similar
fashion, one that is reminiscent of the way we (Karl & Möller 2016)
proceeded in ours, even if with some twists.
For those 10 countries, he uses as his baseline data the
membership figure of the respectively largest (2nd largest for
Scotland, see Hardy 2017, 19-20 for the rationale for this) social media group
– whether that is a discussion board or Facebook group – he managed to identify
by internet searches. The number so determined, he then reduces by 6.58% based
on a poll undertaken by a user with the screenname of ‘Marc’ (2004) on the
largest American ‘treasure hunting’ discussion board (http://www.treasurenet.com/forums/forum.php).
This is based on Hardy’s assumption that an equal percentage of users of metal
detecting discussion boards and Facebook groups are inactive, despite the fact
that a poll on the Scottish board from which he takes his baseline data for
Scotland indicated a 20.83% rate of ‘metal detecting-inactive’ members. For some
countries, that is, Belgium and Denmark, he then deducts from that the reported
numbers of ‘licit’ metal detectorists, to be able to also create an ‘estimate’
of the number of ‘illicit’ detectorists (which is important for his later
calculations of ‘damage’ done by those two different groups).
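As I read it, Hardy’s procedure for these 10 countries can be sketched as follows; the membership and ‘licit’ figures used here are purely illustrative:

```python
# Sketch of Hardy's (2017, 10-20) procedure for the 10 'other' countries,
# as described above: take the largest social media group's membership and
# deduct the 6.58% 'inactive' rate from the 'Marc' (2004) poll. For Belgium
# and Denmark, reported 'licit' detectorists are then subtracted to yield
# an 'illicit' figure. The numbers below are purely illustrative.
INACTIVE_RATE = 0.0658

def low_estimate(group_members):
    """Hardy-style 'low estimate': presumed-active group members."""
    return group_members * (1 - INACTIVE_RATE)

def illicit_estimate(group_members, licit_registered):
    """Belgium/Denmark variant: presumed-active members minus the
    reported number of 'licit' detectorists."""
    return low_estimate(group_members) - licit_registered

print(round(low_estimate(10_000)))             # 9342
print(round(illicit_estimate(10_000, 2_000)))  # 7342
```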
Still, the overall estimate for each of these 10 countries
is his baseline figure, derived from the membership numbers of an online board
or Facebook group, reduced by 6.58%. This is what he calls a ‘low estimate’ for
these countries, disregarding entirely not just the possibility, but
indeed the likelihood, that there may be a considerable number of metal
detectorists who are active in these countries, but have not subscribed to
the respectively largest social media group for it. I will return to this point
further below.
In fact, based on his own argument, it is not just a ‘low’,
but the minimum ‘estimate’ of how many metal detectorists must be active in
each of these countries: it is the number of people who have subscribed to the
largest social media group on this subject in each respective country, minus
those who have to be assumed to be ‘inactive’. Since this leaves only those who
have to be presumed to be active metal detectorists, this gives a minimum number
of metal detectorists who must be presumed to be active in each respective
country.
Another kettle of fish: making up numbers for England and Wales
For England and Wales, he proceeds entirely differently to
arrive at his ‘estimate’ of the number of active metal detectorists in these
two countries (which he treats as one unit of assessment for the purpose of his
comparison).
Here, he first ‘estimates’ the numbers of ‘licit’ metal
detectorists. As the baseline data for his ‘estimate’ of the number of ‘licit’
detectorists, he uses the already estimated figure of c. 15,000 members of the
National Council for Metal Detecting (NCMD) given by Bailie and Ferguson (2017,
14) for the whole of Britain, and reduces this by the 313 reported in the same
source as ‘Scottish’ members of NCMD. Leaving aside for the time being that
this is based on an entirely different kind of data than he uses for the ‘other’
10 countries; and assuming that all NCMD members (who, after all, pay a membership
fee) are indeed active metal detectorists; the resulting figure of 14,687 would
be the minimum number of metal
detectorists who must be presumed to be active in England and Wales.
Thus, arguably, this would be the figure that could be
transnationally comparable to the figures he established for the 10 ‘other’ countries;
even though of course there is the issue that there is no guarantee whatsoever
that the minimum numbers of metal detectorists who must be presumed to be active
in all these countries actually represent roughly the same percentage of the
actual numbers of metal detectorists who are active in each, and thus even a
transnational comparison of these minimum numbers would be seriously
methodologically flawed. But at least, it would be comparing ‘minimum estimates’
with ‘minimum estimates’ across all compared countries.
Yet, that figure is not the one Hardy (2017) then uses as
the ‘low estimate’ of the number of metal detectorists active in England and
Wales in his comparisons and further calculations. Rather, he inflates that ‘minimum
estimate’ by another assumed 9,710 additional ‘licit’ metal detectorists, based
on Thomas’ (2012, 58-9) result that at commercial rallies, 39.8% of
participants stated that they were not affiliated with ‘metal detecting clubs’.
Hardy here assumes that those participating in these rallies who are members of
NCMD would have stated that they were members of a ‘metal detecting club’. Yet,
it is entirely unclear whether the NCMD can be, and indeed would be
considered by its members to be, a metal detecting ‘club’. Indeed, NCMD both has
members in and not in NCMD-internal ‘clubs’ (https://www.ncmd.co.uk/membership/uk/,
12/5/2017), implying that at least some of these would not consider themselves
to be affiliated with a ‘club’ just because they are NCMD members.
At any rate, the figure of now 24,397 that Hardy uses from
then onwards as his ‘low estimate’ for the number of ‘licit’ metal detectorists
in England and Wales is inflated by c. 78% (accounting for the c. 66% by which
the estimated membership of the NCMD is inflated, and the 6.58% by which the
‘low estimates’ for all other 10 countries have been deflated) compared to the figures
he uses for the 10 ‘other’ countries.
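The c. 78% figure can be checked as follows; the calculation compares Hardy’s ‘licit’ estimate with what the NCMD-derived minimum would have been, had it been deflated by 6.58% like the data for the 10 ‘other’ countries:

```python
# Checking the c. 78% inflation of the 'licit' estimate for England and
# Wales, relative to the treatment applied to the 10 'other' countries.
ncmd_minimum = 15_000 - 313                  # 14,687 (Bailie & Ferguson 2017)
like_for_like = ncmd_minimum * (1 - 0.0658)  # c. 13,721 if deflated as above
hardy_licit = 14_687 + 9_710                 # 24,397

print(round((hardy_licit / like_for_like - 1) * 100, 1))  # c. 77.8(%)
```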
Yet, Hardy doesn’t stop there. Rather, he proceeds to
separately ‘estimate’ the number of ‘illicit’ detectorists, by first referring
to the fact that a farmer in Suffolk claims to have caught ‘about 50’ (Gooderham 2009) ‘nighthawks’ on his property over the
course of several years (Hardy 2017, 16), even though in 2008, it only seems to
have been three (Gooderham 2009). The source – and he specifically stresses
that he only used ‘demonstrably reliable
sources’ (Hardy 2017, 10) – for this is the vague recollection of that
farmer, reported in the East Anglian
Daily Times. This number of 50, Hardy then scales up to a nation-wide ‘estimate’
based on little more than the assumption that while these 50 probably weren’t
all from Suffolk, not every ‘illicit’ detectorist in Suffolk would have been
caught by this farmer or even only detected on his land, and thus the 50 could
simply be multiplied by 70 (for the remaining 48 counties of England and 22 of
Wales), to arrive at an ‘estimated’ number of 3,500 ‘illicit’ metal
detectorists active in England and Wales (Hardy 2017, 17).
These he then adds to the ‘estimated’ 24,397 ‘licit’ ones, despite there being a proven overlap between ‘licit’ and ‘illicit’ metal
detectorists; which Hardy discounts for the reason that ‘if this overlap was complete, it would suggest that 14.35% of
detecting hobbyists were detector-using criminals’ (Hardy 2017, 17), and since
his ‘study’s estimate for the number of
licit detectorists is compatible with cultural property protection officials’
as well as metal detectorists’ (Hardy 2017, 17), making it appear ‘reasonable’ (Hardy 2017, 17) to him.
Thus, he simply adds his ‘estimates’ for the number of ‘licit’ and ‘illicit’
detectorists active in England and Wales up to arrive at his overall ‘low
estimate’ for England and Wales. Thus, in Hardy’s methodology, if one is a
member of NCMD or an unaffiliated metal detectorist participating in detecting
rallies, one cannot also be a ‘nighthawk’.
Hardy’s final ‘low estimate’ of 27,897 metal detectorists
active in England and Wales thus is certainly not a ‘minimum estimate’ like
those for the ‘other’ 10 countries he then transnationally compares it to.
Rather, this ‘estimate’ is one of the actual
numbers of metal detectorists Hardy believes to be active in England and
Wales.
Compared to the ‘minimum estimates’ for the 10 ‘other’
countries, this ‘actual estimate’ is inflated by a full 103.32% (if accounting
for both the 89.93% inflation of the English and Welsh baseline figure and the
6.58% deflation of all others), or slightly more than double. Given this
manipulation of the figures he then ‘transnationally compares’, it is hardly
surprising that England and Wales comes up second in Hardy’s (2017, 23) per
capita ‘league table’, and first in his per square kilometre ‘league table’, of
active metal detectorists.
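The 103.32% figure can be verified in the same way:

```python
# Checking the 103.32% inflation of Hardy's overall 'low estimate' for
# England and Wales (27,897), relative to the NCMD-derived minimum of
# 14,687 treated as the 10 'other' countries were (deflated by 6.58%).
hardy_overall = 24_397 + 3_500         # 27,897
like_for_like = 14_687 * (1 - 0.0658)  # c. 13,721

print(round((hardy_overall / like_for_like - 1) * 100, 2))  # 103.32
```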
A short digression: the percentage of metal detectorists subscribing to discussion boards
Interestingly, Hardy (2017, 15) almost completely disregards
that our study (Karl & Möller 2016, 217) had found that on 2/3/2015, the
largest UK metal detecting discussion board had only had 7,331 members, a
number that has since (as of 11/3/2018) risen to 9,059.
Using the reasonably current figure, this would, using the
same transformations as Hardy (2017, 10-20) applied to the comparable data he
used for the ‘other’ 10 countries, give a transnationally comparable figure (that
is, a figure arrived at by subjecting the same kind of data to the same
methodology across all compared countries) of only c. 8,463 active metal
detectorists in England and Wales who had subscribed to the largest UK metal
detecting forum. That figure, assuming Hardy’s ‘estimate’ of c. 27,897 is a
correct estimation of their actual number, would mean that only c. 30% of all
active metal detectorists in England and Wales would be subscribers of the
largest UK discussion board.
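The c. 30% figure can be recomputed as follows:

```python
# Recomputing the c. 30% above: the largest UK board's membership as of
# 11/3/2018, transformed as Hardy transformed the data for the 10 'other'
# countries, set against his 'estimate' of 27,897 for England and Wales.
board_members = 9_059
comparable = board_members * (1 - 0.0658)  # c. 8,463 presumed active

print(round(comparable))                   # 8463
print(round(comparable / 27_897 * 100))    # c. 30(%)
```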
This is, of course, not just perfectly possible, but quite
likely: it must be assumed that not every active metal detectorist subscribes
to an online discussion board for metal detecting. Also, a significant segment
of those who have will not have subscribed to the largest one. There are, after
all, several such boards of varied sizes active in the UK (see Karl &
Möller 2016, 217 for an overview of all boards and their membership figures as
of 2/3/2015), and thus, invariably, there will be some who subscribed only to a
smaller one. Thus, it would seem exceptionally unlikely that any of the largest
‘national’ metal detecting discussion boards would represent 100% of the
community of active metal detectorists in any country. Rather, any social media
group membership must be assumed to represent at best a fraction of the overall
number of metal detectorists active in the country the respective group serves.
In fact, the English and Welsh figures collected by Hardy (2017, 15-6) prove as
much.
One indeed must assume, as Hardy does, that there are likely
at least as many active metal detectorists in the UK as NCMD has members. After
all, NCMD members pay a membership fee, and it seems rather unlikely that many
individuals who are not active metal detectorists would do so, since they would
gain hardly any benefits from their membership. Thus, taking the figure of c.
15,000 reported by Bailie and Ferguson (2017, 14) to be correct, there must
indeed be at least the c. 14,687 active metal detectorists in England and Wales
that Hardy (2017, 16) calculates. Yet, the largest metal detecting board in the
UK has c. 5,628 fewer members than that; members which hail from all over the
UK, and not just England and Wales. Thus, even if one were to disregard Hardy’s
‘estimate’ of c. 27,897 active metal detectorists in England and Wales
completely, and just go with the plain NCMD membership figure, the largest
board still does not represent 100% of all active metal detectorists, but just
a fraction of their overall number.
Thus, Hardy should have realised that, if in England and
Wales, the number of subscribers to the largest ‘national’ metal detecting
discussion board was only about one third of the ‘estimated’ overall number of
active metal detectorists in these countries, the same or at least something
very similar would also apply in all other countries: that the largest
‘national’ board would not represent anything near 100% of the community of
active metal detectorists, but only some fraction of it.
That, however, rules out the ‘transnational comparison’
Hardy (2017, 10-23) attempts between the 10 ‘other’ countries for which his
‘low estimates’ are based on board and social media group membership, and his
‘estimates’ for England and Wales. After all, for the 10 ‘other’ countries,
Hardy (2017, 10-20) takes the membership figure of the respectively largest
‘national’ board or social media group to represent c. 107% of the number of
metal detectorists active in each of these countries. For England and Wales, on
the other hand, he knows that if his ‘estimate’ of active metal detectorists
were correct, the membership figure of the largest English and Welsh board
would represent only about a third (or indeed, using the figure of 6,250 he
gathered by working with the data we had collected on 2/3/2015, only c. 22.5%)
of his ‘estimated’ number of active metal detectorists in England and Wales (see
Hardy 2017, 15).
This constitutes a major and insurmountable problem for a
transnational comparison of these figures. As outlined above, what Hardy is
trying to do is to transnationally compare the numbers of metal detectorists
active in each of the compared countries, with the ultimate goal of establishing
whether more liberal or more restrictive regulation of metal detecting is more
effective in reducing the damage done by this activity to the archaeological record. Thus,
what Hardy is doing is trying to establish the ratio between unknown figures, X
(the actual number of metal detectorists in country 1) and Y (the actual number
of metal detectorists in country 2). He knows other figures, A (e.g. the number
of social media or metal detecting association members in country 1), and B
(e.g. the number of social media or metal detecting association members in
country 2). He also knows that A is a fraction F1 of X; and that B
is a fraction F2 of Y. He thus is working for his ‘transnational
comparison’ with the mathematical formula:
X : Y = (A/F1) : (B/F2)
The problem is: this formula can only create a correct
result if both F1 and F2 are positively known, or if F1=F2,
that is, if F is indeed a constant, as already explained in footnote 2 about the
same formula used by Katharina Möller and me for the same kind of transnational
comparison (Karl & Möller 2016). And indeed, Hardy assumes that F is a
constant for all 10 ‘other’ countries in his comparison: he assumes that,
regularly across those 10 countries, 93.42% of the members of the respectively
largest social media group he has found in each country are active metal detectorists.
Thus, he sets X = (A*0.9342).
However, where Y is concerned, that is, England and Wales,
he does not establish it by setting Y = (B*0.9342), but rather ‘estimates’
a value for Y using different methods, one that equals 455% of B. So Y = (B*4.55).
That, however, would mean that F is not a constant, but rather a variable that
can vary at least by a factor of c. 5 (455/93.42=4.87), that is, almost half an
order of magnitude.
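The size of this variation can be checked directly:

```python
# Checking the 'factor of c. 5' above: for the 10 'other' countries Hardy
# treats the detectorist number as 93.42% of the group membership B
# (Y = B * 0.9342); for England and Wales his 'estimate' implies
# Y = B * 4.55 instead.
f_others = 0.9342
f_england_wales = 4.55

print(round(f_england_wales / f_others, 2))  # c. 4.87
```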
But if F is not a constant, but differs from country to
country, then Hardy cannot set it as a constant for the ‘other’ 10 countries
either, especially not as his F1 isn’t even derived from data from
any of these 10 countries, but indeed based on data from a 12th
country. As F1, he sets the value of 93.42% he established based on
the poll by ‘Marc’ (2004), conducted on the largest metal detecting board in
the USA.
To make matters worse for Hardy’s attempt at a
‘transnational comparison’, the ‘estimate’ he arrives at for the USA (Hardy
2017, 21-2) also proves that the value of 93.42% must not be used as a constant
that can be applied to deflate board membership numbers to arrive at ‘low
estimates’ in the 10 ‘other’ countries. Hardy (2017, 20-2) gives as his ‘low
estimate’ for the USA a figure of c. 160,000 active metal detectorists. Yet,
the board from which the poll by ‘Marc’ (2004) was taken, which is the largest
of its kind in the USA, as of 2/4/2017, only had 113,967 members. If one
discounts the 6.58% of ‘metal detecting-inactive’ members of this board (based
on ‘Marc’ 2004), this leaves us with 106,468 members of this board who we can
presume to be active metal detectorists. But this is just 66.54% of Hardy’s
‘low estimate’ for the USA. Again, assuming Hardy believes his ‘estimate’ for
the USA to be correct, this proves positively that while only 93.42% of the
members of the largest metal detecting discussion board in the USA may be
active metal detectorists, the membership of that board only represents about two
thirds of the active metal detectorists in the USA.
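These figures can be recomputed as follows:

```python
# Recomputing the USA figures above: the membership of the largest US board
# (treasurenet.com, as of 2/4/2017), deflated by the 6.58% 'inactive' rate,
# set against Hardy's 'low estimate' of c. 160,000 for the USA.
board_members = 113_967
presumed_active = board_members * (1 - 0.0658)

print(round(presumed_active))                     # c. 106,468
print(round(presumed_active / 160_000 * 100, 2))  # c. 66.54(%)
```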
What makes matters even worse, Hardy has miscalculated his
figure of metal detectorists active in the USA: while he ‘estimates’ 160,000,
based on his explanations as to how he arrives at this ‘estimate’, the
correctly calculated figure of presumably active metal detectorists in the USA
would be 1.5625 million (more on that little nugget below). Thus, the
membership of the largest discussion board in the USA would only represent c.
7% of all metal detectorists presumably active in this country.
Either way, Hardy would have had to have known that, if his
data and ‘estimates’ for England and Wales, and the USA, were correct, the
number of members of the largest ‘national’ discussion board would be utterly
useless for any transnational comparisons: that membership figure could
represent anything between c. 125% of all active metal detectorists in any country,
and as little as 7% of them, a difference of over one order of magnitude.
If put into the transnational comparison formula, this would not allow one to
deduce the ratio between X and Y by establishing the ratio between
A and B. After all, X could be any fraction F1 of A, and Y any
fraction F2 of B, with us not knowing what values fractions F1
and F2 actually have. Thus, Hardy’s equation ends up as one with two
unknown variables too many, as:
?X : ?Y = (A/?F1) : (B/?F2)
This, however, is a meaningless equation, and thus also a
meaningless comparison. Establishing a ratio between e.g. a figure A = 10 and a
figure B = 20, to draw conclusions about the ratio between two unknown figures
X and Y that A and B represent, is meaningless if we do not know what fraction
of X A represents, and what fraction of Y B represents. A ratio of 1:2 between
A and B does not tell us that the ratio between X and Y may actually be 5:2,
because A represents only 10% of X while B is 50% of Y; or that the ratio
between X and Y may actually be 1:20, because A represents 100% of X but B only
10% of Y. If the
ratio between A and B does not tell us anything meaningful about the ratio
between X and Y, then there is no point in comparing A and B at all.
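This point can be illustrated numerically:

```python
# A numeric illustration of the point above: the same observed ratio
# A : B = 10 : 20 is compatible with entirely different true ratios X : Y,
# depending on the unknown fractions F1 and F2.
A, B = 10, 20

# Case 1: A is 10% of X, B is 50% of Y  ->  X : Y = 100 : 40, i.e. 5 : 2
print(round((A / 0.10) / (B / 0.50), 2))   # 2.5

# Case 2: A is 100% of X, B is 10% of Y  ->  X : Y = 10 : 200, i.e. 1 : 20
print(round((A / 1.00) / (B / 0.10), 2))   # 0.05
```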
Thus, Hardy’s transnational comparison either is utterly
pointless to start with, since he compares unknown fractions of unknown numbers
with other unknown fractions of other unknown numbers, or is fundamentally
false. If the membership data taken from discussion boards cannot be compared
directly, it cannot be used for transnational comparisons in the way Hardy does
either. If it can be transnationally compared, it can only be compared directly
and must not be manipulated as Hardy does. There simply is no middle ground
where some data can be manipulated one way and other data another way, and the
result arrived at still be considered reliable.
More of Hardy’s ‘estimation’ methods: miscalculating figures for the USA
But let us now return to how Hardy did calculate his ‘low
estimate’ for the 12th and final country in his comparison, the USA
(Hardy 2017, 20-2), because he yet again uses completely different kinds of
data and an entirely different methodology for this than for any other of the
transnationally compared countries.
Instead of relying on any internet discussion board, social
media or metal detecting association membership figure to arrive at a baseline
figure from which to proceed, Hardy instead relies on ‘data’ about metal
detector sales figures. According to yet another one of his apparently ‘reliable’ (Hardy 2017, 22) sources,
Emily Yoffe, a journalist, ‘Debra Barton
of First Texas Products, manufacturer of my Tracker IV, estimates the handful
of domestic producers sell a half-million a year’ (Yoffe 2009). Leaving
aside that I do not consider such second-hand hearsay reports by ever so
slightly partisan journalists of ‘estimates’ by sales reps to be reliable
evidence; and leaving aside that this second-hand source doesn’t even say
whether these are domestic or global sales; this provides Hardy (2017, 22) with
his baseline figure for the USA of 500,000 metal detectors sold per annum, which
he takes to be – exclusively – domestic sales.
He then multiplies this baseline figure of 500,000 by an ‘established estimate of the consumption of
0.32 detectors per detectorist per year’ (Hardy 2017, 22), an estimate
which seems to have been ‘established’ by his own research. I will accept this estimated
rate of consumption as correct for the sake of this argument. Thus, he arrives
at his ‘low’ estimate of c. 160,000 metal detectorists in the USA.
Of course, once again, this figure is not an estimate of the
minimum number of metal detectorists which must be presumed to be active in the
USA, but rather of the actual number of
metal detectorists thought likely by Hardy to be active in the USA. Thus,
again, for this reason alone, it is not meaningfully transnationally comparable
to the ‘minimum estimates’ Hardy created for the 10 ‘other’ countries. But
that, in this case, is almost a minor issue compared to the shocking
mathematical mistake made by Hardy when calculating it.
To repeat: he multiplies the reported annual sales figures of metal detectors
by the annualised consumption rate of 0.32 detectors per detectorist (Hardy
2017, 22). Yet, an annual consumption rate of
0.32 detectors per detectorist means that every detectorist on average buys a
new detector every 3.125 years.
Thus, Hardy would have had to divide the annual sales
figures with the annual rate of consumption of detectors; rather than multiply
it: one would only arrive at c. 160,000 active detectorists in the USA based on
500,000 annually sold detectors if every detectorist on average would buy 3.125
new detectors per year, not only 0.32. Thus, if calculated arithmetically
correctly, the figures presented by Hardy (2017, 22) would require him to
‘estimate’ the number of active metal detectorists in the USA at 1.5625 Million.
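The dimensional logic of this correction can be checked in a few lines of code. This is an illustrative sketch only; the 500,000 annual sales figure and the 0.32 consumption rate are taken at face value from Hardy (2017, 22), not independently verified.

```python
# Hardy's (2017, 22) input figures, accepted for the sake of the argument.
annual_sales = 500_000    # detectors sold per year (USA)
consumption_rate = 0.32   # detectors bought per detectorist per year

# Hardy's calculation: multiplying sales by the consumption rate.
# Dimensionally this yields detectors² per detectorist per year², not a head count.
hardys_estimate = annual_sales * consumption_rate    # = 160,000

# The arithmetically correct calculation: dividing sales by the consumption rate.
# (detectors / year) / (detectors / detectorist / year) = detectorists.
corrected_estimate = annual_sales / consumption_rate  # = 1,562,500

print(hardys_estimate, corrected_estimate)
```

Dividing rather than multiplying shifts the ‘estimate’ from 160,000 to 1,562,500 active detectorists, a factor of almost 10.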
Now, this is quite interesting, because if one looks at
Hardy’s per capita league table of the 12 countries he ‘compares’
transnationally, the USA already come first based on his ‘low estimate’ of c.
160,000, with 1 metal detectorist per 1,917 inhabitants (Hardy 2017, 23).
England and Wales, which come 2nd in this table, already have 8% fewer
metal detectorists per capita than the USA, if one were to believe Hardy’s
figures.
Yet, if one used the correctly calculated ‘low estimate’ of
1.5625 million instead, the figure for the USA would rise to
1 active metal detectorist per c. 196 inhabitants. Even England and
Wales, with its excessively liberal system and the PAS, which even advertises
metal detecting to the public as something positive rather than reprehensible,
would have a whopping 10.5 times fewer active metal detectorists per capita
than the USA. Even more remarkably, the USA would also move up to 2nd
place in the per square kilometre league table that Hardy (2017, 24) has
kindly provided, with 1 metal detectorist per almost exactly 6 km².
As a country with a population density of just 32.73 inhabitants per km²,
it would sit just below England and Wales, with a population density
of 382.99 inhabitants per km², and just above the Netherlands,
with 407.69 inhabitants per km².
As a hobby, metal detecting would thus have to be immensely
popular in the USA, 10 times more popular than in any other country in his
comparison.
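These per capita and per km² figures can be recomputed directly. The population and land area used below are assumptions back-derived from Hardy’s own tables (160,000 detectorists at 1 per 1,917 inhabitants; a density of 32.73 inhabitants per km²), so the sketch only checks internal consistency, not real-world values.

```python
# Corrected 'low estimate' for the USA: sales divided by consumption rate.
detectorists = 500_000 / 0.32        # 1,562,500

# Assumed figures, implied by Hardy's (2017, 23-24) own tables:
population = 160_000 * 1_917         # c. 306.7 million inhabitants
area_km2 = population / 32.73        # c. 9.37 million km²

inhabitants_per_detectorist = population / detectorists  # c. 196
km2_per_detectorist = area_km2 / detectorists            # c. 6.0

print(round(inhabitants_per_detectorist), round(km2_per_detectorist, 1))
```

Both ratios quoted above (1 detectorist per c. 196 inhabitants, 1 per c. 6 km²) follow mechanically once the division error is corrected.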
To put it rather bluntly, the figure Hardy (2017, 22)
presents as his ‘low estimate’ for the number of metal detectorists in the USA
is made up, plain and simple. It is a number which may have seemed ‘reasonable’
to Hardy, because it is somewhere in the middle of the ‘estimates’ by others he
found somewhere or other during his web searches. What it isn’t, however, is a
reliable ‘estimate’, based on actual, reliable data. The basic arithmetic
mistake in Hardy’s (2017, 20-22) calculation of his ‘estimate’, of multiplying
where he would have had to divide, just makes it more transparent than with
most other estimates that his figures are not ‘reliable’ in any way, shape or
form, but wild guesses.
Conclusions: Hardy’s fundamental mistakes
To conclude: this short discussion has demonstrated that Hardy’s (2017)
article is fundamentally flawed, both methodologically and arithmetically.
He compares incomparable data with no regard to
transnational data comparability whatsoever. For Australia, Austria, Belgium, Canada, Denmark, the Republic of Ireland,
the Netherlands, New Zealand, Northern Ireland, and Scotland, he creates
‘low estimates’ which reflect the minimum
number of people who must be assumed to be active metal detectorists in each of these countries. His
‘estimate’ in each case is established by slightly deflating the membership
figure of the largest (or, in the case of Scotland, 2nd largest)
national metal detecting discussion board or Facebook group.
These he then compares with allegedly equally ‘low’
‘estimates’ for England and Wales and the USA. However, the ‘estimates’ for
these countries are not estimates of the minimum numbers of metal detectorists who
must be presumed to be active in them, but rather what Hardy believes
to be ‘reasonable’ estimates of the actual
numbers of active metal detectorists in these countries. Of those, the
‘estimate’ for England and Wales is based on the vastly inflated, estimated
membership figure of NCMD plus another inflated number for ‘illicit’ metal
detectorists derived from scaling up an uncorroborated claim by a farmer in a
local newspaper. This is even though Hardy could have used the plain number of
NCMD members as a ‘minimum estimate’ of metal detectorists who must be presumed
to be active in England and Wales, a figure which might (just about) have been
transnationally comparable to those for the ‘other’ 10 countries at least to
some extent. The ‘estimate’ for the USA, finally, is simply a made-up number
that seemed ‘reasonable’ to Hardy, but that is actually based on a shocking
arithmetic mistake which, if corrected, changes Hardy’s ‘estimate’ by a full
order of magnitude.
Of course, it is methodologically
inadmissible to compare minimum
estimates – that is, figures representing the lower end of the scale of
possible values of a quantity, in this case the number of metal detectorists – for some countries with estimates of the
actual number of metal detectorists – that is, a number somewhere between
the lower and upper end of that same scale – in others. Any
conclusions drawn from any such comparison are necessarily fundamentally
methodologically flawed and thus cannot be taken as a serious contribution to
academic debate.
Thus, rather than having conducted a sound empirical study,
based on a rigorous methodology, consistently applied to actually comparable
data, Hardy (2017) produced a
transnational comparison of wild guesstimates that seemed ‘reasonable’ to him
but actually only shows his bias. Rather than telling us anything about
whether restrictive or liberal systems of regulating metal detecting are more
effective in reducing damage to archaeology, his study only tells us which way
of regulating metal detecting Hardy prefers, whether consciously or
subconsciously.
The method Hardy actually uses – which bears no relation
whatsoever to that used by Möller and me in our empirical examination of metal
detecting regulation – is unsuited to answering the questions he asks; and his
study is executed exceptionally badly, even at the level of basic arithmetic.
The conclusions he arrives at, while clearly popular with some partisan
factions in this debate (The Heritage Journal 2017) who share his bias, are,
thus, not trustworthy at all. Hardy’s
results and conclusions therefore must sadly be disregarded in their entirety.
Bibliography
Bailie, W. R., Ferguson, N. 2017. An assessment of the extent and character of hobbyist metal detecting
in Scotland. Edinburgh: Historic Environment Scotland.
Boghossian, P., Lindsay, J. 2017. The
Conceptual Penis as a Social Construct: A Sokal-Style Hoax on Gender Studies.
Skeptic, 19/5/2017 [11/3/2018].
Dobat, A.S., Jensen, A.T. 2016. “Professional Amateurs”. Metal
Detecting and Metal Detectorists in Denmark. Open Archaeology 2, 70-84.
Gooderham, D. 2009. Farmer’s fight against treasure hunters.
East Anglian Daily Times, 17/4/2009. http://www.eadt.co.uk/news/farmer_s_fight_against_treasure_hunters_1_193797
[11/3/2018].
Hardy, S.A. 2017. Quantitative analysis of open-source data
on metal detecting for cultural property: Estimation of the scale and intensity
of metal detecting and the quantity of metal-detected cultural goods. Cogent Social Sciences 3, http://dx.doi.org/10.1080/23311886.2017.1298397.
Karl, R. 2011. On the highway to hell. Thoughts on the
unintended consequences of § 11 (1) Austrian Denkmalschutzgesetz. The Historic Environment – Policy and Practice
2/2, 111-33.
Karl, R., Möller, K. 2016. Empirische Untersuchung des Verhältnisses der
Anzahl von MetallsucherInnen im deutsch-britischen Vergleich. Oder: wie wenig
Einfluss die Gesetzeslage hat. Archäologische
Informationen 39, 215-226. http://dx.doi.org/10.11588/ai.2016.1.33553.
Marc 2004. How often
do you metal detect? http://www.treasurenet.com/forums/voting-booth/720-how-often-do-you-metal-detect.html
[11/3/2018].
The Heritage Journal 2017. A second bombshell for Britain
from Dr Sam Hardy! https://heritageaction.wordpress.com/2017/04/03/a-second-bombshell-for-britain-from-dr-sam-hardy/
[11/3/2018].
Thomas, S. 2012. Searching for answers: a survey of metal detector
users in the UK. International Journal of
Heritage Studies 18, 49–64. http://dx.doi.org/10.1080/13527258.2011.590817.
Yoffe, E. 2009. Full metal racket. Slate, 25/9/2009. http://www.slate.com/articles/news_and_politics/
recycled/2009/09/full_metal_racket.html [2/4/2017].
[1] Cogent Social Sciences is a ‘Pay to
Publish’ Open Access Journal, which allegedly ensures high quality standards of
papers published in it through a rigorous process of peer review. As recently
demonstrated by Peter Boghossian and James Lindsay (2017), that quality
assurance process seems to be less effective than desired. This is also
demonstrated by the fact that Cogent Social Sciences refused to correct the
serious arithmetical error in Hardy’s calculation of the number of metal
detectorists in the USA when I made the journal aware of it.
[2] If
X = A*F and Y = B*F, then X : Y = (A*F) : (B*F) = A : B; e.g. (A*0.5) : (B*0.5)
= (A*0.3) : (B*0.3) = (A*2) : (B*2) = A : B. As long as F is a
constant, it does not matter what the actual values of F, X, and Y are: as long
as we have values for A = X/F and B = Y/F, we can reliably establish the
ratio between X and Y.
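The invariance claimed in this footnote is easy to verify numerically. The values below are arbitrary illustrations, not data from any of the studies discussed.

```python
# Arbitrary illustrative values: A and B stand for directly observed
# quantities (e.g. forum membership figures); F is an unknown but
# constant scaling factor.
A, B = 12_000, 4_800

for F in (0.3, 0.5, 2.0):
    X, Y = A * F, B * F
    # The ratio X : Y is independent of F and always equals A : B.
    assert abs(X / Y - A / B) < 1e-9

print(A / B)  # the shared ratio, here 2.5
```

Because the constant F cancels, the ratio of the observed quantities can be compared across countries without ever knowing F itself.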