We have compiled a playlist of a few EngLaId talks that are available to watch on YouTube. There are a couple of videos of Chris Green giving talks on EngLaId topics and a couple of videos of Chris Gosden giving longer lectures on the project. There is also a video of Miranda Creswell drawing at Danebury and a short video of reactions to a talk given by Chris Gosden.
This is just a short post to announce the publication of our new paper on performing analysis of PAS data. It is open access and has been published by Internet Archaeology:
This study tackles fundamental archaeological questions using large, complex digital datasets, building on recent discussions about how to deal with archaeology’s emerging ‘data deluge’ (Bevan 2015). At a broad level, it draws on the unprecedented volume of legacy data gathered from many different sources – almost one million records in total – for the English Landscape and Identities project (Oxford, UK). More specifically, the paper focuses in detail on artefact evidence – material derived primarily from surface surveys, stray finds and metal detecting. Novel computational models are developed that extend and connect ideas from usually distinct research realms (different arenas of artefact research, digital archaeology, etc.). Major interpretative issues are addressed including how to approach background factors that shape the archaeological record, and how to understand spatial and temporal patterning at various scales. Overall, we suggest, interpreting large complex datasets sparks different ways of working, and raises new theoretical concerns.
Here is a short video demonstrating the usage of the Portal to the Past website:
The website can be found at: http://englaid.arch.ox.ac.uk
EDIT: Here is a second longer video with audio commentary:
This is just a short post to announce the launch of our ArcGIS WebApp that enables the exploration of a limited version of our dataset in a web-mapping environment.
A user guide and links to the WebApp can be found by clicking on the Portal to the Past page in the menu above.
The WebApp itself can be found here: http://englaid.arch.ox.ac.uk
I have been thinking a lot recently about using maps as effective tools for visual communication of data. Chen et al. (2014) wrote that visualization of data should be about getting your message across in a time-efficient manner, which Kent (2005) stated depends upon producing aesthetically pleasing results. All maps (being one form of data visualization) are imperfect models of the world (as all models are imperfect), and we must take care that our maps communicate their intended messages effectively.
Without wishing to get unduly political, I want to work through these ideas using the example of this summer’s “Brexit” vote. Data on the referendum results can be found here and data on UK boundary lines here. There are many (infinite?) different potential ways of visualising this data spatially, but I am going to explain the messages I see in a few examples here.
First up, we have a simple rendering of the results using the district divisions by which the data was originally counted and parcelled up, in which the saturation of the yellows (remain) and blues (leave) show the percentage lead each vote had in districts which each side “won”:
Yellow and blue have been used as that seems to be the convention settled on by most of our media. This map shows which areas felt particularly strongly one way or the other about the question asked and works well in that regard. However, it also gives a somewhat misleading message, as some of the high value districts are of relatively low population density. As an alternative then, we can keep the same division into “leaver” and “remainer” districts, but instead use the shading to show population density:
This map loses the nuance of showing how strong the vote was in either direction, but gains something by showing which districts have more people living in them. Most notable is the stark difference between the districts in eastern England around The Wash, which are of low population density (for the UK!), but which felt very strongly that the UK should leave the EU.
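As a sketch, the yellow / blue shading used in the first map could be computed along the following lines. This is illustrative only: the 30-point cap on the lead is my own placeholder, not the breakpoint used for the published maps.

```python
def lead_colour(leave_pct):
    """Map a district's Leave vote share (as a percentage) to an RGB
    triple: blue for a Leave lead, yellow for a Remain lead, with
    saturation scaling with the size of the lead."""
    lead = leave_pct - 50.0                 # points ahead of a 50/50 split
    strength = min(abs(lead) / 30.0, 1.0)   # illustrative cap at a 30-point lead
    if lead >= 0:
        # Leave: fade from white towards blue
        return (1.0 - strength, 1.0 - strength, 1.0)
    # Remain: fade from white towards yellow
    return (1.0, 1.0, 1.0 - strength)
```

A 50/50 district comes out white, and the colour deepens towards pure blue or pure yellow as the lead grows, which is exactly why sparsely populated districts with strong results dominate the eye.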
We can also look at the result in much starker terms. The recent High Court decision has increased the likelihood of there being a Parliamentary vote on invoking Article 50, so I wanted to see which way the various constituencies fell in terms of “leave” or “remain”. This is not simple, however, as the results were reported by district, and districts often do not match constituencies. As such, I reapportioned the votes from districts between constituencies on the basis of spatial area (e.g. if a constituency covered half a district, it would receive half of that district’s votes). This is imperfect, as population density is not uniform across any district, but it was the best I could do with the data to hand. The results show that, if Parliament does get to vote on Article 50 and MPs vote as their constituents voted, then “Leave” will comfortably win (Northern Ireland has not been included, but does not have enough MPs to make a difference either way):
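The area-weighted reapportionment described above can be sketched in Python. This toy version uses axis-aligned rectangles in place of the real district and constituency polygons (a GIS computes true polygon intersections), but the weighting logic is the same.

```python
def overlap_area(a, b):
    """Overlap of two axis-aligned rectangles (xmin, ymin, xmax, ymax),
    standing in for the polygon intersection a GIS would compute."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0.0) * max(h, 0.0)

def reapportion(district_votes, district_shapes, constituency_shapes):
    """Split each district's votes between constituencies in proportion
    to spatial overlap (assumes uniform population density)."""
    totals = {c: 0.0 for c in constituency_shapes}
    for district, votes in district_votes.items():
        d = district_shapes[district]
        d_area = (d[2] - d[0]) * (d[3] - d[1])
        for c, shape in constituency_shapes.items():
            totals[c] += votes * overlap_area(d, shape) / d_area
    return totals
```

A district straddling two constituencies equally would hand each exactly half its votes, which is precisely where the uniform-density assumption can mislead.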
All of these maps work reasonably well at expressing one element of the data, but I wanted to come up with a visualization that produced a more complex picture of the results yet without abandoning geographic space (i.e. I did not want to use a cartogram):
This final map reworks the results into hexagonal spatial bins, using the same method as when I reworked the results into constituencies (i.e. assignment by spatial area overlap). Here, the blue / yellow shading has returned to showing the strength of the result, but we can now also see data on population at the same time through the thickness / blackness of the lines around the hexagons. I feel that this map does a pretty good job of showing the distribution of the vote (spatially, strength-wise, and population-wise) whilst still allowing people to locate themselves reasonably well geographically (which would not be the case with a cartogram). Hexagons have been preferred over squares largely due to their visual appeal and because humans have a tendency to see false straight lines in data binned into square-based grids.
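For the curious, here is a sketch of the geometry underlying hexagonal binning. My maps assigned values by area overlap, as noted above; this simpler point-to-hexagon assignment is the standard axial-coordinate approach for flat-topped hexagons and shows how a location maps to a cell.

```python
import math

def point_to_hex(x, y, size):
    """Return the axial (q, r) coordinates of the flat-topped hexagon
    of circumradius `size` containing the point (x, y)."""
    q = (2.0 / 3.0 * x) / size
    r = (-1.0 / 3.0 * x + math.sqrt(3.0) / 3.0 * y) / size
    return hex_round(q, r)

def hex_round(q, r):
    """Round fractional axial coordinates to the nearest hexagon,
    using the cube-coordinate constraint q + r + s = 0."""
    s = -q - r
    rq, rr, rs = round(q), round(r), round(s)
    dq, dr, ds = abs(rq - q), abs(rr - r), abs(rs - s)
    if dq > dr and dq > ds:
        rq = -rr - rs   # reset the coordinate with the largest rounding error
    elif dr > ds:
        rr = -rq - rs
    return (rq, rr)
```

Binning then amounts to accumulating each record's value in the dictionary entry for its `(q, r)` cell.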
Whatever you think of the referendum result, I hope that my worked example has helped to explain how making a map is not always a simple task. Careful thought about audience, message, and data structure needs to go into any visualisation if effective communication is to be achieved. I hope that my final map succeeds in that task!
Chen, M., L. Floridi, and R. Borgo. 2014. What is visualization really for? The Philosophy of Information Quality. Springer Synthese Library Volume 358, 75-93
Kent, A.J. 2005. Aesthetics: a lost cause in cartographic theory? Cartographic Journal 42(2), 182-188
All maps contain Ordnance Survey data (C) Crown copyright and database right 2016
Our new paper has just gone up for online first access. It’s available here if you have access to the Archaeological Journal via a library / university:
Ever since I was an undergraduate (and attempted to write a “mental geography” of Roman Britain for my dissertation), I have been interested in Claudius Ptolemy’s Geography. Ptolemy was an Alexandrian Greek and his Geography dates to the mid-second century AD: it contains coordinates from which it is possible to make maps of the entire known world at that time, including data representing the earliest surviving reasonably accurate survey of the British Isles. For the purposes of the EngLaId Atlas, which I am currently working on, I decided to see if I could plot Ptolemy’s Britain (or Albion as he called it) over the modern OS map.
To do so, I copied out the coordinates for Ptolemy’s places (representing points along coastlines, islands, and major settlements) from Rivet & Smith 1979. I suspect that there may be one or two typos in their lists (as a couple of the points in the final maps are not quite in the same place as they are on Rivet & Smith’s map), but I am not too worried about that for now. The task was then to convert Ptolemy’s coordinates so that they could be plotted onto the OS National Grid.
The first job was to correct for Ptolemy’s underestimate of the circumference of the planet (it was this underestimate that caused Columbus to be so confident about being able to reach the Indies by sailing west, thus accidentally discovering the Americas): to do so, all of the coordinates were first multiplied by 0.798. I then needed to recentre the coordinates so that they related to modern latitude / longitude: I used London / Londinium as a fixed point in both Ptolemy and the modern world, on the assumption that the provincial capital of Britannia ought to be relatively precisely located in Ptolemy’s data. This involved adding 8.41 degrees to each latitude measure and subtracting 16.06 degrees from each longitude measure.
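Expressed in Python, the two corrections amount to a single small function (scale first, then recentre):

```python
def ptolemy_to_modern(lon, lat):
    """Convert one of Ptolemy's coordinate pairs (decimal degrees) to an
    approximate modern position: scale by 0.798 to correct his
    underestimate of the Earth's circumference, then apply the London
    offsets (add 8.41 to latitude, subtract 16.06 from longitude)."""
    return (lon * 0.798 - 16.06, lat * 0.798 + 8.41)
```

Applying this to each coordinate pair in the list gives positions ready for plotting in modern latitude / longitude.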
I then created a shapefile in ArcGIS from the coordinate list using the WGS84 projection settings and then reprojected the map into OSGB 1936, ArcGIS’s representation of the OS National Grid. The points were then filtered out into islands, settlements, and coastline vertices. I had given the coastline points an “order” field (based upon the order of coordinates in Ptolemy) and used the Points to Line tool in ArcGIS to convert them to a line. I then converted the line to a polygon using Feature to Polygon. Finally, a few extra vertices were added to the coastline polygon using the editing tools in order to ensure that the settlement points were all on dry land. Here is the result:
Several things jump out. The most noticeable (and long commented on) is Ptolemy’s rotation of Scotland. Why he did this has been the subject of much debate: possibly he believed that a N-S Scotland would extend too far north, or possibly he lacked reliable data on travel times through those non-Imperial lands. The latter is rather key to understanding the Geography: whereas latitude was fairly straightforward to calculate in the past, without chronometers longitude was much more difficult and relied largely upon calculations made using travel time itineraries. We can see the results of this in the way that most of the settlements in England / Wales are reasonably precise in their latitude (N-S) but much more imprecise in their longitude (E-W): York forms a good example. Overall, considering the time when it was constructed, Ptolemy’s Geography contains an impressive representation of Britain (south of Scotland).
I then experimented with a couple of transformations to see if I could improve the plotting onto the National Grid. First, I tried rotating the data so that the north of England more closely aligned with the modern map (actually an affine transformation using London, York and Chester as fixed points, so the geometry is slightly deformed, especially for Scotland):
The result is not really all that great, as the south of England then becomes much less closely aligned with the modern map. I also tried a rubbersheet transformation, using London as a fixed point and moving Ptolemy’s York onto modern York:
This turns the map into a really quite close approximation of the modern English / Welsh coastline, with the exceptions of the immense length of the south west and the rather stunted East Anglia. However, as it disturbs the geometrical relationship between Ptolemy’s coordinates, I decided in the end that my first model was probably the best: after all, I could keep adding points to the transformation until everything mapped perfectly onto the modern geography, but what would be the point of that? I would just be recreating the OS map.
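For anyone wanting to reproduce the affine step outside a GIS: an affine transform is fully determined by three control-point pairs, and can be recovered by solving two small linear systems. The sketch below does this with Cramer's rule; the control points in use would be the London, York and Chester pairs mentioned above (the values here are placeholders, not the actual coordinates).

```python
def _det3(m):
    """Determinant of a 3x3 matrix given as a list of row lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def _solve3(m, v):
    """Solve the 3x3 linear system m @ x = v by Cramer's rule."""
    d = _det3(m)
    out = []
    for col in range(3):
        mc = [row[:] for row in m]
        for r in range(3):
            mc[r][col] = v[r]
        out.append(_det3(mc) / d)
    return out

def affine_from_points(src, dst):
    """Build the affine map (x, y) -> (a*x + b*y + c, d*x + e*y + f)
    sending three source control points onto three destinations."""
    m = [[x, y, 1.0] for x, y in src]
    a, b, c = _solve3(m, [x for x, _ in dst])
    d, e, f = _solve3(m, [y for _, y in dst])
    return lambda x, y: (a * x + b * y + c, d * x + e * y + f)
```

Because the fit is exact at only three points, everything else (Scotland especially) gets dragged along with whatever rotation, scaling and shear those three pairs imply, which is exactly the deformation visible in the rotated map.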
This was just a short experiment for the purposes of debate and making a nice map. It seems likely that I may have done something spatially naive in plotting the data using the WGS84 settings, but the end results are rather pleasing in any event.
Rivet, A.L.F. & C. Smith. 1979. The Place-names of Roman Britain. London: Batsford.
The maps contain Ordnance Survey data (OpenData). (C) Crown Copyright and Database Right 2016.