Thoughts on Archaeological Predictive Modeling in the Northeast symposium (2016)

As described in my last post, I gave a talk at a symposium entitled Archaeological Predictive Modeling in the Northeast last Friday (3/11/16). That post has a few notes, my slides, and my abstract. It was a really interesting symposium, and I enjoyed meeting new people from the Northeast archaeological community. I hope they continue these sessions and keep discussing these matters.

I have had a few days to reflect on the symposium and wanted to jot down a few of my thoughts. These may come off as pessimistic, but that is only because I am optimistic about what quantitative methods such as predictive modeling can do for us in archaeology. Perhaps I am a little frustrated that we have such a long road before we get there. I am not sure these thoughts will win me any new friends, but that is ok. We need frank and informed discussion if we hope to maintain relevance and advance as a field.

TL;DR: skepticism + no metrics + exaggerated expectations = Naïve Falsifiability

Post-Symposium Thoughts (03.13.2016):

  1. Skepticism. The bookend talks and the general tone of this symposium were skeptical of archaeological predictive models (APM) that venture beyond a basic weighted-sum model and the VT paper checklist. We should not be skeptical of trying to do better, but we are. I advocated being very self-skeptical of our models, and argued that simple models are great when they fit the intended purpose.
  2. A lot of what was touched on or wished for as predictive modeling is really just basic descriptive analysis of archaeological data. It is hard to have a discussion about predictive modeling specifically if we are really talking about many different kinds of analysis. Simple descriptive and low-dimensional analysis of the data we currently have would go very far in helping us understand what is data-supported and what is archaeological tradition. Perhaps seeing the reality of what our site information tells us will help people feel more comfortable making inferences from it.
  3. The keynote was based on a simple statistical hypothesis test of regional site locations carried out 30 years ago; it looked quite interesting. That analysis served as the basis of studies and policy for the next 30 years, yet it was framed by its author as the whizz-bang New Archaeology null-hypothesis-testing magic of days gone by. Why do we poke fun at the methods of an analysis that clearly had a major impact on how settlement patterns are interpreted in the region? It was a well-done study and should be treated that way. Would it not have been productive to build off it with additional quantitative studies in that region over the intervening decades? (A toy version of this kind of test is sketched after this list.)
  4. While there was some discussion of model testing with field survey, there was no discussion of testing models with independent data. This is not too shocking (perhaps it should be): with arbitrarily weighted summed models, you are not really assessing fit or tuning parameters, so CV error, AIC, and similar metrics can be side-stepped entirely. You certainly can and should test models of the basic summed type, but it is not frequently done.
    1. While simple summed models are fine and useful (I wrote a bit about the basic version here), there are simple ways to make them better, including accounting for a variable's discrimination against the background (a weights-of-evidence (WOE) model was briefly mentioned in the symposium), proper thresholds, and hold-out sample testing (see the hold-out sketch after this list).
  5. There seems to be a ton of cross-talk about what models should be capable of. My point was that they are only capable of what we can formalize as expectations, what our data can reasonably tell us, and what uncertainty can be incorporated into our methods. The general vibe was that simple predictive models are supposed to solve problems, find all sites, or produce an understanding of causality. None of these things should be expected! What should be expected is that maybe, just maybe, we can find a weak signal in the covariates that we can measure, project it to other areas, and achieve a reasonable error rate (see the cross-validation sketch after this list). Alternatively, we can make strong assumptions, study our data, and make educated model-based inference to gain a better understanding of data-generating processes. Either way, it is HARD FREAKIN’ WORK! This will not be solved with button-mashing GIS techniques.
  6. My fear is that the combination of these general feelings results in a situation where APM and inferential archaeological modeling are doomed to be no more than a niche pursuit. Perhaps “niche pursuit” is a little strong, since some fields of archaeology are well tuned to quantitative methods and don’t have these issues to the same degree. Perhaps it is better to say that quantitative methods are not likely to be “mainstream” under these conditions. Were they ever mainstream, or am I a Processual apologist? My point is that the combination of overblown/unreal expectations + lack of testing/metrics + general skepticism = a standard of naïve falsificationism under which no model is capable of succeeding. APM, or really any archaeological model, is not likely these days to be posited as a universal generalization from observed data. Therefore, there is no need to discard these models based on examples that do not fit. A model should be presented as a set of assumptions, an estimate of how well the observed data fit those assumptions, and what we can reasonably generalize based on that fit.
    1. Simply put, there are archaeological sites everywhere, and we continue to prove this. That is about as much of a universal statement about archaeological data as I can muster. Because of this, the only model that is not falsifiable is one in which everything is site-likely and everywhere should be tested because we may miss a site. Funny enough, that was pretty much the agreed-upon sentiment of this symposium: sites are everywhere, we cannot know why, and therefore everywhere needs testing. Frankly, I totally agree with this too. But if we are so comfortable with that understanding, then why hold this symposium on APM? The short answer is that we cannot test everywhere, that there are some sites we will never find, and that we intuitively know that structuring our observations through models can help us gain a new understanding, even if we don’t know how to get there. The reality is that we can either keep going around the same pony ride or accept the limitations of our data, understanding, and models, and move forward to develop methods that work within those constraints. The sooner we come to terms with that, the sooner we can begin to agree on how wrong a model can be and still be useful (to paraphrase Box 1976). Then we can add models as a productive part of our archaeological toolkit and make better-informed decisions. Or we can just keep talking about it…
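Since I would rather show than just keep talking, here are a few toy sketches in Python of the kinds of analysis mentioned above. Every number, class break, and weight in them is invented for illustration; none of this is the symposium’s data or anyone’s actual model.

First, for point 3: the sort of simple regional hypothesis test the keynote described can be as small as a chi-square test of whether site counts across landform classes depart from what survey coverage alone would predict.

```python
from scipy.stats import chisquare

# Hypothetical site counts per landform class and the area surveyed in
# each class (all numbers invented for illustration).
classes = ["floodplain", "terrace", "upland"]
observed = [46, 31, 13]                  # sites found per class
surveyed_km2 = [12.0, 15.0, 18.0]        # survey coverage per class

# Expected counts if sites fell at random with respect to landform,
# i.e., proportional to coverage.
total = sum(observed)
expected = [total * a / sum(surveyed_km2) for a in surveyed_km2]

stat, p = chisquare(observed, f_exp=expected)
print(f"chi-square = {stat:.1f}, p = {p:.4g}")
# A small p-value says the site distribution departs from coverage alone;
# it does not, by itself, say why.
```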
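Second, for point 4.1: a minimal sketch of hold-out testing for a summed model, comparing arbitrary expert weights against data-driven weights-of-evidence. The covariates and simulated data are stand-ins; the point is only that choosing a threshold on training cells and scoring on unseen cells gives an honest error estimate.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical covariates for 1,000 surveyed cells (all invented).
n = 1000
dist_water = rng.uniform(0, 2000, n)   # distance to water, meters
slope = rng.uniform(0, 30, n)          # slope, degrees

# Simulate site presence with a weak dependence on the covariates.
logit = -1.0 - 0.0012 * dist_water - 0.06 * slope
site = rng.random(n) < 1 / (1 + np.exp(-logit))

# Binary "favorable" classes, as in a typical summed model.
near_water = dist_water < 250
gentle = slope < 8

# Split once; everything below is estimated on the training cells only.
test = rng.random(n) < 0.3
tr = ~test

def woe(flag, present):
    """Weight of evidence: log of P(class|site) / P(class|background),
    with a 0.5 continuity correction to avoid log(0)."""
    a = (flag & present).sum() + 0.5
    b = (flag & ~present).sum() + 0.5
    return np.log((a / (present.sum() + 1.0)) / (b / ((~present).sum() + 1.0)))

# Arbitrary expert weights vs. data-driven WOE weights.
score_expert = 2.0 * near_water + 1.0 * gentle
score_woe = (woe(near_water[tr], site[tr]) * near_water
             + woe(gentle[tr], site[tr]) * gentle)

for name, score in [("expert", score_expert), ("WOE", score_woe)]:
    cut = np.median(score[tr])                 # threshold from training only
    pred = score[test] > cut
    sens = pred[site[test]].mean()             # sites correctly flagged
    spec = (~pred[~site[test]]).mean()         # background correctly excluded
    print(f"{name}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```

Nothing fancy: even the basic summed model can be tested this way, and letting the data set the weights is a one-function change.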
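Third, for point 5: hunting for a weak signal with a defensible error rate can be as simple as a logistic regression scored under five-fold cross-validation (scikit-learn here, again with stand-in data).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n = 1000

# Hypothetical covariates; only a weak signal is baked in, the rest is noise.
X = np.column_stack([
    rng.uniform(0, 2000, n),   # distance to water, meters
    rng.uniform(0, 30, n),     # slope, degrees
])
p = 1 / (1 + np.exp(1.0 + 0.0005 * X[:, 0] + 0.02 * X[:, 1]))
y = rng.random(n) < p

# Five-fold cross-validated AUC: an honest estimate of out-of-sample skill.
model = make_pipeline(StandardScaler(), LogisticRegression())
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {auc.mean():.2f} +/- {auc.std():.2f}")
```

A cross-validated AUC only modestly above 0.5 is exactly the “weak signal” scenario: real and projectable, but nowhere near “finding all sites.”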