Prototype ABM of consumer behaviour

Last summer I worked with my colleague Dr Andy Newing and a Master’s dissertation student, Charlotte Sturley, who has just won the Royal Geographical Society GIS group prize for best dissertation.  Her work focused on classifying consumer data into several groups of behaviour and then building a prototype ABM using NetLogo.

This work posed several challenges: how do we translate observed behaviour into rules that an agent can follow satisfactorily? How should we represent time to mimic temporal as well as spatial patterns in different types of consumer behaviour? Which of the many processes involved in this system should we include? Charlotte's dissertation (and upcoming paper) addresses these issues in depth, but in brief: the data were analysed using classification methods and spatial analysis tools to identify different groups of individuals and their behaviour. We built a highly abstract representation of Leeds which allowed us to match behaviour to the corresponding geodemographic classifications and add in real store distributions. These can be seen below, with the red blobs representing different types of stores and the coloured squares representing different areas of Leeds and the consumer types that reside there.
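The setup step – giving each agent a behaviour profile matched to the geodemographic classification of its home area – can be sketched roughly as follows. This is a minimal Python sketch, not the NetLogo code; the group names, parameters and area codes are invented for illustration (Charlotte's actual classifications came from the consumer data):

```python
import random

# Hypothetical behaviour profiles for the consumer groups -- the real
# groups were derived from classification of the consumer data.
PROFILES = {
    "convenience": {"trip_freq": 6, "max_dist": 1.0},   # trips/week, km
    "budget":      {"trip_freq": 4, "max_dist": 2.0},
    "bulk_buyer":  {"trip_freq": 1, "max_dist": 10.0},
}

def make_agents(area_classification, n_per_area=10, seed=0):
    """Create agents whose behaviour matches the geodemographic
    classification of the area they live in."""
    rng = random.Random(seed)
    agents = []
    for area, group in area_classification.items():
        profile = PROFILES[group]
        for _ in range(n_per_area):
            agents.append({
                "area": area,
                "group": group,
                # small individual variation around the group profile
                "trip_freq": max(1, profile["trip_freq"] + rng.choice([-1, 0, 1])),
                "max_dist": profile["max_dist"],
            })
    return agents

areas = {"LS1": "convenience", "LS6": "budget", "LS16": "bulk_buyer"}
agents = make_agents(areas)
print(len(agents), agents[0]["group"])
```

In the NetLogo model the equivalent logic lives in the setup procedure, with the coloured squares in the Leeds map carrying the classification that agents inherit.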


This is, of course, a highly abstract representation of what is a very complex system, and clearly a significant amount of further development would be required for the model to fully replicate the real system. However, one of the research questions we were interested in addressing was whether an ABM could replicate the pull of consumers to a store based on distance and attractiveness, i.e. could we embed this aspect of a spatial interaction model into an ABM? The answer was yes, and this represents a potentially important shift in the methods by which retailers simulate the likely consequences of different policies on consumer behaviour.
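As a rough illustration of that idea, here is a minimal Python sketch (not the NetLogo implementation we used) of a Huff-style choice rule, where an agent picks a store with probability proportional to attractiveness divided by a power of distance; the parameter names and values are illustrative:

```python
import math
import random

def choose_store(agent_xy, stores, alpha=1.0, beta=2.0, rng=random):
    """Pick a store with probability proportional to
    attractiveness^alpha / distance^beta (a Huff-style rule)."""
    ax, ay = agent_xy
    weights = []
    for s in stores:
        d = max(math.dist((ax, ay), (s["x"], s["y"])), 0.1)  # avoid div by zero
        weights.append(s["attractiveness"] ** alpha / d ** beta)
    # weighted random draw over the stores
    r = rng.random() * sum(weights)
    for s, w in zip(stores, weights):
        r -= w
        if r <= 0:
            return s
    return stores[-1]

stores = [
    {"name": "A", "x": 0, "y": 0, "attractiveness": 10},
    {"name": "B", "x": 5, "y": 0, "attractiveness": 10},
]
rng = random.Random(1)
picks = [choose_store((1, 0), stores, rng=rng)["name"] for _ in range(2000)]
print(picks.count("A") / 2000)  # nearer store of equal attractiveness wins most trips
```

Inside the ABM this rule sits in each agent's shopping step, which is how the pull of the spatial interaction model gets embedded in individual behaviour.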

More details on this work can be found in Charlotte’s upcoming paper.  A copy of the model code can be downloaded here.

ABM Congress, Washington

I attended the International Congress on Agent Computing at George Mason University (US) last month. It was organised to mark the 20th anniversary of the publication of Robert Axtell and Joshua Epstein's landmark work, Growing Artificial Societies, and as such was both a celebration and a reflection on how far the discipline has progressed over the last 20 years.

While it is clear that in some areas there have been great gains, such as the size and complexity of ABMs (not to mention the sheer number of applications – in his presentation Robert Axtell gave the following figures based on a keyword search of publications: 1,000 papers per year on IBM, 10,000 per year on MAS and 5,000 per year on ABM), I see these gains as mainly attributable to advances in software and the availability of data, not because we are tackling the big methodological problems. I would strongly agree with Axtell that ABMs are still 'laboratory animals' and not yet ready for uptake in policy. This view surprisingly contrasted with Epstein, who in his opening remarks described ABM as a 'mature scientific instrument', perhaps nodding towards the large numbers of (often bad) ABMs that are continually appearing. However, Epstein did agree with Axtell on several challenges/definitive pieces of work that ABM needs to take on: creating cognitively plausible agents (accompanied by a big plug for Epstein's recent book, Agent Zero, on this very topic); not getting side-stepped by big data – "Data should be as big as necessary, but no bigger" (a nice play on Einstein's 'models should be as simple as possible, but no simpler'); and calibrating large-scale ABMs.

It is this last point, calibration and validation, that can be blamed for my grumpy mood throughout most of the Congress presentations. There was some fantastic work, creating very complex agents and environments, but these models were calibrated and validated using simple statistics such as R^2! Complex models = (often) complex results, which in turn require complex analysis tools. By the time my slot came around on the last afternoon, I was in the mood for a bit of a rant… which is exactly what I did! But I'd like to think I did it in a professional way… I presented a joint talk with Andrew Crooks and Nick Malleson entitled "ABM for Simulating Spatial Systems: How are we doing?", which reflected on how well (or not) ABM of geographical systems has advanced over the last 20 years.
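To show why a single aggregate statistic is not enough, here is a toy example (numbers invented): two models earn an identical R^2 against observed zone counts even though one of them has spatially clustered bias, which is exactly the kind of structure the aggregate statistic hides:

```python
def r_squared(obs, pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return 1 - ss_res / ss_tot

observed = [10, 20, 30, 40, 50, 60]   # e.g. footfall per zone
model_a  = [12, 18, 32, 38, 52, 58]   # errors alternate: +2, -2, +2, ...
model_b  = [12, 22, 32, 38, 48, 58]   # same-sized errors, but all positive
                                      # in one half of the city, negative
                                      # in the other (clustered bias)
print(r_squared(observed, model_a))
print(r_squared(observed, model_b))   # identical value, very different model
```

Both models score R^2 ≈ 0.986, yet model B systematically over-predicts one side of the study area: a validation approach that looks at the spatial structure of the errors is needed to tell them apart.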



We argued that while as geographers we are very good at handling space (thanks to GIS), we're not very good at representing relationships and interactions (human to human, and human to environment). We also need to look closely at how to scale up individual agents: for example, how can we take an agent created at the neighbourhood level, with its own rules and explicit use of space, and scale this up to the city level while preserving all of that agent's characteristics and behaviours? Work needs to be done now to shape how we use Big Data to ensure that it becomes an asset to ABM, not a burden. And then I moved on to calibration and validation! It wasn't all gloom: the presentation featured lots of eye candy thanks to Nick and Andrew.

While the congress brought together an interesting line-up of interdisciplinary keynote speakers – Brian Arthur, Mike Batty, Stuart Kauffman and David Krakauer – all were men. Of the 19 posters and 59 presentations, only a handful were by women. I find this lack of diversity disappointing (I refer here to gender, but this could equally be applied to other aspects of diversity). While women are in the minority in this discipline, we do have a presence, and an event reflecting on the past and celebrating a promising future should have fully reflected this.

However, I don't wish to end on a negative note: the Congress was fantastic in the breadth of work that it showcased, and because it was so small it had a genuinely friendly and engaging feel to it. The last word should go to Epstein, who I felt summed up ABM nicely with the following: "As a young science, [it has made] tremendous progress and [has great] momentum".


Heppenstall, A., Crooks, A.T. and Malleson, N. (2016) ABM for Simulating Spatial Systems: How are we doing? International Congress on Agent Computing, 29th-30th November, Fairfax, VA.

Agent-based Modelling in Geographical Systems

Recently Andrew Crooks and I wrote a short introductory chapter entitled "Agent-based Modeling in Geographical Systems" for AccessScience (an online version of the McGraw-Hill Encyclopedia of Science and Technology).

In the chapter we trace the rise of agent-based modeling within geographical systems, with a specific emphasis on cities. We briefly outline how thinking about and modeling cities has changed, how agent-based models align with this thinking, and give a selection of example applications. We also discuss the current limitations of agent-based models, ways of overcoming them, and how such models can be (and have been) used to support real-world decision-making.
Conceptualization of an agent-based model where people are connected to each other and take actions when a specific condition is met.

 Full Reference:

Heppenstall, A. and Crooks, A.T. (2016). Agent-based Modeling in Geographical Systems, AccessScience, McGraw-Hill Education, Columbus, OH. DOI: (pdf)

Charge of the Lycra Brigade…

It's inevitable that I would be drawn into doing some work about cycling – I live in a small market town that is overrun by crazy people donning lycra and heading for the hills. The Tour de Yorkshire is the latest in a series of major cycle events coming through my town this weekend. We were lucky enough to be on the route of Le Grand Depart back in July 2014, an event that did bring out the entire community. While we can muse about who actually comes to these events – they involve public money, so they should be accessible to and attended by all sections of society, right? – we don't actually know for sure. Finding out was the purpose of some work that I did with Matt Whittle and Nik Lomax last summer. We worked with LCC on some very tasty data that was collated throughout Le Grand Depart. The findings do indeed back up what you would expect: typically, those who came to view the race fall into the category commonly labelled MAMILs (middle-aged men in lycra), and this was particularly prevalent at the King of the Mountains sections. Further details about this work can be found in the very catchy-sounding Conversation article: Charge of the lycra brigade.

The Future of Geocomputation Workshop

I've just returned from a workshop on Geocomputation at King's College London. The event was put together to bring researchers from around the country together to discuss the 'future of Geocomputation'. There were three keynotes (Chris Brunsdon, Alex Singleton and myself), each giving a different view on the future of Geocomputation. Whilst we concentrated on different aspects (technology, in particular agent-based modelling, for me; a Bayesian approach called ABC (approximate Bayesian computation) for Chris; and the teaching of GIS for Alex), there was commonality in the areas where we felt future work was needed, such as data handling, visualisation, more engaging teaching methods and teaching programming to students. For me, the future of Geocomputation is very much going to be shaped by developments in both agent-based modelling and big data. Instead of developing yet more agent frameworks (of which there are numerous – I did a head count of about 86), we should focus on tackling the thorny issues of identifying behaviour and processes in systems, as well as calibration and validation.
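For readers unfamiliar with ABC, the core idea is simple enough to sketch in a few lines of Python. This toy rejection sampler (the toy model and parameter names are my own, not from Chris's talk) keeps only those parameter draws whose simulated summary statistic lands close to the observed one:

```python
import random

def abc_rejection(observed_stat, simulate, prior_sample, n=5000, tol=0.3, seed=0):
    """Rejection ABC: draw theta from the prior, simulate, and keep the
    draws whose summary statistic falls within `tol` of the observed one."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n):
        theta = prior_sample(rng)
        if abs(simulate(theta, rng) - observed_stat) < tol:
            accepted.append(theta)
    return accepted

# Toy 'model': the summary statistic is the mean of 50 noisy draws
# around the unknown parameter theta (think of it as a mean trip rate).
def simulate(theta, rng):
    return sum(rng.gauss(theta, 1.0) for _ in range(50)) / 50

posterior = abc_rejection(
    observed_stat=3.0,
    simulate=simulate,
    prior_sample=lambda rng: rng.uniform(0, 10),
)
print(len(posterior), sum(posterior) / len(posterior))  # posterior mean near 3
```

The appeal for ABM calibration is that `simulate` can be the agent-based model itself: no likelihood function is ever written down, only summary statistics compared, which is exactly why the approach suits models too complex for conventional fitting.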


This is something I will return to in a future post, but a copy of my slides can be found by clicking on Heppenstall.