New Paper on Crime Theory

One of my former students, Dr Nawaf Alotaibi, has just published his first paper in the International Criminal Justice Review, critiquing whether Western criminology theories are applicable to non-Western contexts such as Saudi Arabia.  The paper can be downloaded here.

Crime within Arabic countries is significantly different from Western crime in type, frequency, and motivation. For example, motor vehicle theft (MVT) has constituted the largest proportion of property crime incidents in Saudi Arabia (SA) for decades. This is in stark contrast to Western countries where burglary and street theft dominate. Environmental criminology theories, such as routine activity theory and crime pattern theory, have the potential to help to investigate Arabic crime. However, there is no research that has sought to evaluate the validity of these theories within such a different cultural context. This article represents a first step in addressing this substantial research gap, taking MVT within SA as a case study. We evaluate previous MVT studies using an environmental criminology approach with a critical view to applying environmental criminology to an Arabic context. The article identifies a range of key features in SA that are different from typical Western contexts. These differences could limit the appropriateness of existing methodologies used to apply environmental criminology. The study also reveals that the methodologies associated with traditional environmental crime theory need adjusting more generally when working with MVT, not least to account for shifts in the location of opportunities for crime with time.

Contribution to Geocomp 2017 Keynote

One of the Geocomputation keynotes is going to be crowdsourced.  This is the first time that this has happened at Geocomputation.  The brave souls pulling this together are Dr Adam Dennett and Dr Dianna Smith.  I’ve added my thoughts/musings – copied below (please note these were written off the top of my head).  Do get involved and give your opinions on the subject; you will be credited on the keynote – go here.

Thoughts / questions / musings / predictions / observations and things that are getting you all excited about the future of GeoComputation as a sub-discipline

“As I’m from Yorkshire, I can’t just post ‘excited’ things about Geocomputation – I have to start with some whinging to get comfortable. My area of Geocomputation, individual-based modelling (IBM), has several very important methodological issues to overcome: understanding patterns in spatio-temporal data, simulating (human) behaviour and, most importantly, robustly calibrating and validating simulation models. With the heralding of ‘big data’, we have a real opportunity to use new forms of micro data both to improve the realism of our models and to give rigour to calibration and validation. However, this hasn’t happened. Why? Personally, I think that researchers have been distracted from the big issues in IBM (and Geocomp more broadly) by both these new forms of data and the easy DIY IBM frameworks that are abundantly available. I feel that IDEs (e.g. NetLogo, perhaps not so much Repast) that allow ABMs to be rapidly thrown together are having a negative effect. Journals are full of models that have little engagement with theory and are poorly calibrated and validated. Why is this important? Well, as academics we want our work to have a positive societal impact and be taken up by policymakers. There are innumerable challenges that now face us, e.g. dealing with an ageing population, creating smart and sustainable cities, etc. Technologies such as IBM can provide valuable insight that can help policymakers solve some of these issues. But without robust calibration and validation of these approaches (comparable to that found in climate models), these models remain academic playthings.
IBM, especially ABM, is a bit of an anomaly, as it has developed rapidly in several silos over the past 20 years – there is no centrally held ‘best practice’, and the discipline certainly needs input from other areas such as maths (error quantification), physics (handling non-linearity and complexity), computing (large simulations), and sociology, human geography and psychology (behavioural frameworks and theory) to progress. To move ABM forward, the community needs to work together – but where to start?
Geocomputation is a rapidly moving subject and I feel the definition is very dynamic, changing with the current fad, e.g. most people would associate ABM with Geocomp rather than other approaches, e.g. Bayesian ones. However, if we strip it back to basics, it is, as Andy Evans describes, “the art of solving complex problems with computers” – increasing computer power, technology (sharing and dissemination platforms) and more data give us the opportunity to solve (and contribute to) these problems, and this is possibly the most exciting part of Geocomputation. But as a community will we ever get our act together and realise this potential?”

New Commentary Paper

The Geocomputation Conference series is coming home to Leeds (my home institution) this year.  The conference series will be 21 years old!  And to celebrate this landmark birthday, a few of the Geocomputation community were invited to contribute to a commentary article in the current issue of Environment and Planning B.  The article summarises a range of different views on how Geocomputation has developed over the past two decades, and certainly highlights some commonly shared frustrations.


Prototype ABM of consumer behaviour

Last summer I worked with my colleague Dr Andy Newing and a Master’s dissertation student, Charlotte Sturley, who has just won the Royal Geographical Society GIS group prize for best dissertation.  Her work focused on classifying consumer data into several groups of behaviour and then building a prototype ABM using NetLogo.

This work posed several challenges: how do we translate observed behaviour into rules that an agent can act on satisfactorily? How should we represent time to mimic temporal as well as spatial patterns in different types of consumer behaviour?  Which of the many processes involved within this system should we include?  Charlotte’s dissertation (and upcoming paper) addresses these issues in depth, but in brief: the data were analysed (using classification methods and spatial analysis tools) to identify different groups of individuals and their behaviour.  We built a highly abstract representation of Leeds which allowed us to match behaviour to the corresponding geodemographic classifications and add in real store distributions.  These can be seen below, with the red blobs representing different types of stores and the coloured squares representing different areas of Leeds and the different consumer types that reside there.


This is, of course, a highly abstract representation of what is a very complex system, and clearly a significant amount of development would be required for the model to fully replicate the real system.  However, one of the research questions that we were interested in addressing was whether an ABM could replicate the pull of consumers to a store based on distance and attractiveness, i.e. could we embed this aspect of a spatial interaction model into an ABM?  The answer was yes, and this represents a potentially important shift in the methods by which retailers simulate the likely consequences of different policies on consumer behaviour.
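For the curious, the “pull” idea can be sketched as a Huff-style choice rule, where a store’s probability of attracting an agent rises with its attractiveness and decays with distance.  This is a minimal illustrative sketch in Python, not Charlotte’s actual NetLogo model; the store names, coordinates and parameter values are all invented:

```python
import math
import random

def huff_probabilities(consumer_xy, stores, alpha=1.0, beta=2.0):
    """Huff-style choice probabilities: a store's pull grows with its
    attractiveness and decays with distance from the consumer."""
    weights = []
    for store in stores:
        dist = math.dist(consumer_xy, store["xy"])
        # Floor the distance so a co-located store doesn't divide by zero.
        weights.append(store["attractiveness"] ** alpha / max(dist, 0.1) ** beta)
    total = sum(weights)
    return [w / total for w in weights]

def choose_store(consumer_xy, stores, rng=random):
    """Each tick, the agent picks a store stochastically by its pull."""
    probs = huff_probabilities(consumer_xy, stores)
    return rng.choices(stores, weights=probs, k=1)[0]

# Illustrative stores: a distant but attractive superstore vs a nearby corner shop.
stores = [
    {"name": "large out-of-town", "xy": (10.0, 10.0), "attractiveness": 8.0},
    {"name": "small local", "xy": (1.0, 1.0), "attractiveness": 2.0},
]
probs = huff_probabilities((0.0, 0.0), stores)
```

With these invented parameters the nearby shop wins most of the time despite its lower attractiveness, which is exactly the distance/attractiveness trade-off the spatial interaction model captures.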

More details on this work can be found in Charlotte’s upcoming paper.  A copy of the model code can be downloaded here.

ABM Congress, Washington

I attended the International Congress on Agent Computing at George Mason University (US) last month.  It was organised to mark the 20th anniversary of the publication of Robert Axtell and Joshua Epstein’s landmark work, Growing Artificial Societies, and as such was both a celebration and a reflection on how far the discipline has progressed over the last 20 years.

While it is clear that in some areas there have been great gains, such as the size and complexity of ABMs (not to mention the sheer number of applications – Robert Axtell in his presentation gave the following figures based on a keyword search of publications: 1K papers per year on IBM, 10K per year on MAS and 5K per year on ABM), I see these gains as mainly attributable to advances in software and the availability of data, and not because we are tackling the big methodological problems.  I would strongly agree with Axtell that ABMs are still ‘laboratory animals’ and not yet ready for uptake in policy.  This view surprisingly contrasted with that of Epstein, who in his opening remarks described ABM as a ‘mature scientific instrument’, perhaps nodding towards the large numbers of (often bad) ABMs that are continually appearing.  However, Epstein did agree with Axtell in the discussion of several challenges/definitive work that ABM needs to take on, such as creating cognitively plausible agents (accompanied by a big plug for Epstein’s recent book, Agent Zero, on this very topic), not getting side-stepped by big data – “Data should be as big as necessary, but no bigger” (a nice play on Einstein’s ‘models should be as simple as possible, but no simpler’) – and calibrating large-scale ABMs.

It is this last point, that of calibration and validation, that can be blamed for my grumpy mood throughout most of the Congress presentations.  There was some fantastic work, creating very complex agents and environments, but these models were calibrated and validated using simple statistics such as R^2!  Complex models = (often) complex results, which in turn require complex analysis tools.  By the time my presentation came around on the last afternoon, I was in the mood for a bit of a rant… which is exactly what I did! But I’d like to think I did it in a professional way…  I presented a joint talk with Andrew Crooks and Nick Malleson entitled “ABM for Simulating Spatial Systems: How are we doing?”, which reflected on how well (or not) ABM of geographical systems has advanced over the last 20 years.
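As a toy illustration of the point (the counts below are invented and have nothing to do with any specific Congress model): a simulation that puts every count in the wrong neighbouring zone can still achieve a perfect R^2 once its outputs are aggregated, which is why a single summary statistic is a weak validation test for a spatial model.

```python
def r_squared(observed, predicted):
    """Coefficient of determination: 1 - SSE/SST (can be negative)."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((p - o) ** 2 for o, p in zip(observed, predicted))
    sst = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / sst

def aggregate(counts, block):
    """Sum fine-grained zone counts into coarser blocks of the given size."""
    return [sum(counts[i:i + block]) for i in range(0, len(counts), block)]

# Invented counts for 8 fine-grained zones, and a model whose output lands
# in the wrong zone of every pair while preserving the pairwise totals.
observed  = [10, 0, 0, 20, 5, 0, 0, 15]
predicted = [0, 10, 20, 0, 0, 5, 15, 0]

r2_coarse = r_squared(aggregate(observed, 2), aggregate(predicted, 2))
r2_fine   = r_squared(observed, predicted)
```

Here the aggregated R^2 is a perfect 1.0 while the zone-level R^2 is strongly negative: the model is spatially wrong everywhere, yet the headline statistic says it is flawless.  Spatially explicit comparisons at multiple scales avoid being fooled in this way.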



We argued that while as geographers we are very good at handling space (thanks to GIS), we’re not very good at representing relationships and interactions (human to human and human to environment).  We also need to look closely at how to scale up individual agents; for example, how can we take an agent created at the neighbourhood level, with its own rules and explicit use of space, and scale this up to the city level (preserving all the characteristics and behaviours of that agent)?  Work needs to be done now to shape how we use Big Data to ensure that it becomes an asset to ABM, not a burden.  And then I moved on to calibration and validation!  It wasn’t all gloom; the presentation featured lots of eye candy thanks to Nick and Andrew.

While the congress brought together an interesting line-up of interdisciplinary keynote speakers – Brian Arthur, Mike Batty, Stuart Kauffman and David Krakauer – all were men.  Of the 19 posters and 59 presentations, only a handful were given by women.  I find this lack of diversity disappointing (I refer here to gender, but this could equally be applied to other aspects of diversity).  While women are in the minority in this discipline, we do have a presence, and such an event reflecting on the past and celebrating a promising future should have fully reflected this.

However, I don’t wish to end on a negative note: the Congress was fantastic in the breadth of work that it showcased, and because it was so small, it had a genuinely friendly and engaging feel to it.  The last word should go to Epstein, who I felt summed up ABM nicely with the following: “As a young science, [it has made] tremendous progress and [has great] momentum”.


Heppenstall, A., Crooks, A.T. and Malleson, N. (2016) ABM for Simulating Spatial Systems: How are we doing? International Congress on Agent Computing, 29th–30th November, Fairfax, VA.

Agent-based Modelling in Geographical Systems

Recently Andrew Crooks and I wrote a short introductory chapter entitled “Agent-based Modeling in Geographical Systems” for AccessScience (an online version of the McGraw-Hill Encyclopedia of Science and Technology).

In the chapter we trace the rise of agent-based modeling within geographical systems, with a specific emphasis on cities. We briefly outline how thinking about and modeling cities has changed and how agent-based models align with this thinking, along with giving a selection of example applications. We discuss the current limitations of agent-based models, ways of overcoming them, and how such models can be, and have been, used to support real-world decision-making.
Conceptualization of an agent-based model where people are connected to each other and take actions when a specific condition is met

 Full Reference:

Heppenstall, A. and Crooks, A.T. (2016). Agent-based Modeling in Geographical Systems, AccessScience, McGraw-Hill Education, Columbus, OH. DOI: (pdf)

Charge of the Lycra Brigade…

It’s inevitable that I would be drawn into doing some work about cycling – I live in a small market town that is overrun by crazy people donning lycra and heading to the hills.  The Tour de Yorkshire, the latest in a series of major cycle events, is coming through my town this weekend.  We were lucky enough to be on the route of Le Grand Depart back in July 2014, an event that did bring out the entire community.  While we can muse about who actually comes to these events – they involve public money, so they should be accessible and attended by all sections of society, right? – we don’t actually know for sure. This was the purpose of some work that I did with Matt Whittle and Nik Lomax last summer.  We worked with LCC on some very tasty data that was collated throughout Le Grand Depart.  The findings do indeed back up what you would expect: typically, those who came to view the race fall into the category commonly labelled as MAMILs (middle-aged men in lycra); this was particularly prevalent at the King of the Mountains sections.  Further details about this work can be found in the very catchy-sounding Conversation article: Charge of the lycra brigade.

The Future of Geocomputation Workshop

I’ve just returned from a workshop on Geocomputation at King’s College London.  The event was put together to bring researchers from around the country together to discuss the ‘future of Geocomputation’.  There were three keynotes (Chris Brunsdon, Alex Singleton and myself), each giving a different view on the future of Geocomputation.  Whilst we concentrated on different aspects (technology, in particular agent-based modelling, for me; a Bayesian approach called ABC for Chris; and the teaching of GIS for Alex), there was commonality in the areas where we felt future work was needed, such as data handling, visualisation, more engaging teaching methods and teaching programming to students.  For me, the future of Geocomputation is very much going to be shaped by developments in both agent-based modelling and big data.  Instead of developing agent frameworks (of which there are numerous – I did a head count of about 86), we should instead focus on tackling the thorny issues of identifying behaviour and processes in systems, as well as calibration and validation.


This is something I will return to in a future post, but a copy of my slides can be found by clicking on Heppenstall.