Book Launch!

Amazingly, we actually finished the book and still liked each other enough to tolerate being in the same room (the photo is proof!). We shared the launch with the amazing Mike Batty and his new book. A really lovely evening, thanks to CASA and Adam Dennett for funding and organising 🙂


The text below is taken directly from Andrew Crooks’ excellent write-up of the book.

It’s been a long time in the making but now “Agent-Based Modelling and Geographical Information Systems: A Practical Primer” has been published by Sage. We (Nicolas Malleson, Ed Manley, Alison Heppenstall and myself) approached this book from two standpoints. First, to provide a synthesis of the underpinning ideas, techniques and frameworks for integrating agent-based modelling and geographical information systems (GIS). Second, building on our experiences of teaching at various levels, to provide a practical set of information for the development of agent-based models for geographical systems.

From these two standpoints we have developed a book that provides a practical primer in the integration of agent-based modelling and geographical information systems. In outlining the subject we cover many examples of geographical phenomena, from linking the individual movements of pedestrians to aggregate patterns of urban growth, to the integration of social networks into modelling mobility. Through this text, we hope the reader will understand how the field has developed, how agent-based models are different from other modelling approaches, and the future challenges we see lying ahead.

By using sample code and data (all of which can be found on the accompanying website), we provide the reader with many of the basic building blocks for constructing agent-based models linked to geographical information systems. Throughout the book we use the software package NetLogo, as it provides an easy route to learning and building agent-based models (although in the appendix we provide links to models created in other platforms).

For more information visit the accompanying website, and if you wish to buy a copy you can do so via Amazon or Sage Publishing. We hope you enjoy it.

New book! ABM and GIS

Coming in January 2019…! Buy it here

This is the era of Big Data and computational social science. It is an era that requires tools which can not only visualise data but also model the complex relationship between data and human action and interaction. Agent-Based Models (ABM) – computational models which simulate human action and interaction – do just that.

This textbook explains how to design and build ABMs and how to link the models to Geographical Information Systems. It guides you from the basics through to constructing more complex models which work with data and human behaviour in a spatial context. All of the fundamental concepts are explained and related to practical examples to facilitate learning (the models are developed in NetLogo, with all code examples available on the accompanying website). You will be able to use these models to develop your own applications and link them, where appropriate, to Geographical Information Systems.

All of the key ideas and methods are explained in detail:

  • geographical modelling;
  • an introduction to ABM;
  • the fundamentals of Geographical Information Science;
  • why ABM and GIS;
  • using QGIS;
  • designing and building an ABM;
  • calibration and validation;
  • modelling human behaviour;
  • visualisation and 3D ABM;
  • using Big Geosocial Data, GIS and ABM.

An applied primer that provides fundamental knowledge and practical skills, it will equip you to build and run your own models and to begin your own research projects.

New Paper on Crime Theory

One of my former students, Dr Nawaf Alotaibi, has just published his first paper in the International Criminal Justice Review, critiquing whether Western criminology theories are applicable to a non-Western context such as Saudi Arabia. The paper can be downloaded here.




Crime within Arabic countries is significantly different from Western crime in type, frequency, and motivation. For example, motor vehicle theft (MVT) has constituted the largest proportion of property crime incidents in Saudi Arabia (SA) for decades. This is in stark contrast to Western countries where burglary and street theft dominate. Environmental criminology theories, such as routine activity theory and crime pattern theory, have the potential to help to investigate Arabic crime. However, there is no research that has sought to evaluate the validity of these theories within such a different cultural context. This article represents a first step in addressing this substantial research gap, taking MVT within SA as a case study. We evaluate previous MVT studies using an environmental criminology approach with a critical view to applying environmental criminology to an Arabic context. The article identifies a range of key features in SA that are different from typical Western contexts. These differences could limit the appropriateness of existing methodologies used to apply environmental criminology. The study also reveals that the methodologies associated with traditional environmental crime theory need adjusting more generally when working with MVT, not least to account for shifts in the location of opportunities for crime with time.

Contribution to Geocomp 2017 Keynote

One of the Geocomputation keynotes is going to be crowdsourced. This is the first time that this has happened at Geocomputation. The brave souls pulling this together are Dr Adam Dennett and Dr Dianna Smith. I’ve added my thoughts/musings, copied below (please note these were written off the top of my head). Do get involved and give your opinions on the subject – you will be credited on the keynote – go here.

Thoughts / questions / musings / predictions / observations and things that are getting you all excited about the future of GeoComputation as a sub-discipline

“As I’m from Yorkshire, I can’t just post ‘excited’ things about Geocomputation – I have to start with some whinging to get comfortable. My area of Geocomputation, individual-based modelling (IBM), has several very important methodological issues to overcome: understanding patterns in spatio-temporal data, simulating (human) behaviour and, most importantly, robustly calibrating and validating simulation models. With the heralding of ‘big data’, we have a real opportunity to use new forms of micro data both to improve the realism of our models and to bring rigour to calibration and validation. However, this hasn’t happened. Why? Personally I think that researchers have been distracted from the big issues in IBM (and Geocomp more broadly) by both these new forms of data and the easy DIY IBM frameworks that are abundantly available. I feel that IDEs (e.g. NetLogo, perhaps not so much Repast) that allow ABMs to be rapidly thrown together are having a negative effect. Journals are full of models that have little engagement with theory and are poorly calibrated and validated. Why is this important? Well, as academics we want our work to have a positive societal impact and be taken up by policymakers. There are innumerable challenges that now face us, e.g. dealing with an ageing population, creating smart and sustainable cities, etc. Technologies such as IBM can provide valuable insight that can help policymakers in solving some of these issues. But without robust calibration and validation of these approaches (comparable to that found in climate models), these models remain academic playthings.
IBM, and especially ABM, is a bit of an anomaly as it has developed rapidly in several silos over the past 20 years – there is no centrally held ‘best practice’, and the discipline certainly needs input from other areas such as maths (error quantification), physics (handling non-linearity and complexity), computing (large simulations), sociology, human geography and psychology (behavioural frameworks and theory) to progress. To move ABM forward, the community needs to work together – but where to start?
Geocomputation is a rapidly moving subject and I feel the definition is very dynamic, changing with the current fad, e.g. most people would associate ABM with Geocomp rather than other approaches such as Bayesian methods. However, if we strip it back to basics, it is, as Andy Evans describes, “the art of solving complex problems with computers” – increasing computer power, technology (sharing and dissemination platforms) and more data give us the opportunity to solve (and contribute to) these problems, and this is possibly the most exciting part of Geocomputation. But as a community will we ever get our act together and realise this potential?”

New Commentary Paper

The Geocomputation Conference series is coming home to Leeds (my home institution) this year.  The conference series will be 21 years old!  And to celebrate this landmark birthday, a few of the Geocomputation community were invited to contribute to a commentary article in the current issue of Environment and Planning B.  The article summarises a range of different views on how Geocomputation has developed over the past two decades, and certainly highlights some commonly shared frustrations.


Prototype ABM of consumer behaviour

Last summer I worked with my colleague Dr Andy Newing and a Master’s dissertation student, Charlotte Sturley, who has just won the Royal Geographical Society GIS group prize for best dissertation.  Her work focused on classifying consumer data into several groups of behaviour and then building a prototype ABM using NetLogo.

This work posed several challenges: how do we translate observed behaviour into rules that an agent can act on satisfactorily? How should we represent time to mimic temporal as well as spatial patterns in different types of consumer behaviour? Which of the many processes involved within this system should we include? Charlotte’s dissertation (and upcoming paper) addresses these issues in depth, but in brief the data was analysed (using classification methods and spatial analysis tools) to identify different groups of individuals and their behaviour. We built a highly abstract representation of Leeds which allowed us to match behaviour to the corresponding geodemographic classifications and add in real store distributions. These can be seen below, with the red blobs representing different types of stores and the coloured squares representing different areas of Leeds and the different consumer types that reside there.


This is, of course, a highly abstract representation of what is a very complex system, and clearly a significant amount of development to the model would be required to fully replicate the real system. However, one of the research questions that we were interested in addressing was whether ABM could replicate the pull of consumers to a store based on distance and attractiveness, i.e. could we embed this aspect of a spatial interaction model into an ABM? The answer was yes, and this represents a potentially important shift in the methods by which retailers simulate the likely consequences of different policies on consumer behaviour.
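To sketch the idea: a Huff-style spatial interaction rule can sit inside an agent's decision step, with the probability of choosing a store proportional to its attractiveness divided by distance raised to a decay exponent. The snippet below (in Python rather than NetLogo, for brevity) is a minimal illustration under my own assumed names and parameter values – the actual rules live in Charlotte's dissertation, paper and model code:

```python
import math
import random

def huff_probabilities(agent_xy, stores, beta=2.0):
    """Huff-style choice probabilities: P(store j) ∝ attractiveness_j / distance_j^beta.
    'stores' is a list of (x, y, attractiveness) tuples; beta=2.0 is an
    illustrative distance-decay exponent, not a calibrated value."""
    scores = []
    for (sx, sy, attractiveness) in stores:
        d = math.hypot(agent_xy[0] - sx, agent_xy[1] - sy) + 1e-9  # avoid div-by-zero
        scores.append(attractiveness / d ** beta)
    total = sum(scores)
    return [s / total for s in scores]

def choose_store(agent_xy, stores, rng=random):
    """One agent's shopping-trip decision: a weighted random draw over stores."""
    probs = huff_probabilities(agent_xy, stores)
    return rng.choices(range(len(stores)), weights=probs, k=1)[0]
```

With two equally attractive stores, an agent close to the first will pick it far more often than the second, which is exactly the distance/attractiveness pull the spatial interaction model describes.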

More details on this work can be found in Charlotte’s upcoming paper.  A copy of the model code can be downloaded here.

ABM Congress, Washington

I attended the International Congress on Agent Computing at George Mason University (US) last month. It was organised to mark the 20th anniversary of the publication of Robert Axtell and Joshua Epstein’s landmark work, Growing Artificial Societies, and as such was both a celebration and a reflection on how far the discipline has progressed over the last 20 years.

While it is clear that in some areas there have been great gains, such as in the size and complexity of ABMs (not to mention the sheer number of applications – Robert Axtell in his presentation gave the following figures based on a keyword search of publications: 1K papers per year on IBM, 10K per year on MAS and 5K per year on ABM), I see these gains as mainly attributable to advances in software and the availability of data, and not because we are tackling the big methodological problems. I would strongly agree with Axtell that ABMs are still ‘laboratory animals’ and not yet ready for uptake in policy. This view surprisingly contrasted with Epstein, who in his opening remarks described ABM as a ‘mature scientific instrument’, perhaps nodding towards the large numbers of (often bad) ABMs that are continually appearing. However, Epstein did agree with Axtell in the discussion of several challenges/definitive work that ABM needs to take on, such as creating cognitively plausible agents (accompanied by a big plug for Epstein’s recent book, Agent Zero, on this very topic), not getting sidetracked by big data – “Data should be as big as necessary, but no bigger” (a nice play on Einstein’s ‘models should be as simple as possible, but no simpler’) – and calibrating large-scale ABMs.

It is this last point, that of calibration and validation, that can be blamed for my grumpy mood throughout most of the Congress presentations. There was some fantastic work, creating very complex agents and environments, but these models were calibrated and validated using simple statistics such as R²! Complex models = (often) complex results, which in turn require complex analysis tools. By the time my presentation slot came around on the last afternoon, I was in the mood for a bit of a rant… which is exactly what I did! But I’d like to think I did it in a professional way… I presented a joint talk with Andrew Crooks and Nick Malleson entitled “ABM for Simulating Spatial Systems: How are we doing?”, which reflected on how well (or not) ABM of geographical systems has advanced over the last 20 years.
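To illustrate why a single summary statistic can mislead here, consider a toy example of my own (not from any of the Congress talks): a simulation that reproduces the observed distribution of values perfectly – so an R² computed on the value distributions is exactly 1 – while putting every value in the wrong cell:

```python
import math

obs = [8.0, 2.0, 1.0, 9.0]   # observed counts in four grid cells
sim = [9.0, 1.0, 2.0, 8.0]   # same set of values, assigned to the wrong cells

def r2(a, b):
    """Coefficient of determination of b against a."""
    ss_res = sum((x - y) ** 2 for x, y in zip(a, b))
    mean_a = sum(a) / len(a)
    ss_tot = sum((x - mean_a) ** 2 for x in a)
    return 1.0 - ss_res / ss_tot

# Comparing the sorted value distributions reports a "perfect" fit...
r2_dist = r2(sorted(obs), sorted(sim))  # = 1.0

# ...yet a cell-by-cell comparison shows every location is wrong:
rmse_cell = math.sqrt(sum((x - y) ** 2 for x, y in zip(obs, sim)) / len(obs))  # = 1.0
```

Complex spatial outputs need comparisons that respect location and structure, not just a goodness-of-fit number on the aggregate distribution.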



We argued that while as geographers we are very good at handling space (thanks to GIS), we’re not very good at representing relationships and interactions (human to human and human to environment). We also need to look closely at how to scale up individual agents: for example, how can we take an agent created at the neighbourhood level, with its own rules and explicit use of space, and scale this up to the city level (preserving all the characteristics and behaviours of that agent)? Work needs to be done now to shape how we use Big Data to ensure that it becomes an asset to ABM, not a burden. And then I moved on to calibration and validation! It wasn’t all gloom – the presentation featured lots of eye candy thanks to Nick and Andrew.

While the congress brought together an interesting line-up of interdisciplinary keynote speakers – Brian Arthur, Mike Batty, Stuart Kauffman and David Krakauer – all were men. Of the 19 posters and 59 presentations, only a handful were given by women. I find this lack of diversity disappointing (I refer here to gender, but this could equally be applied to other aspects of diversity). While women are in the minority in this discipline, we do have a presence, and such an event, reflecting on the past and celebrating a promising future, should have fully reflected this.

However, I don’t wish to end on a negative note: the Congress was fantastic in the breadth of work that it showcased, and because it was so small it had a genuinely friendly and engaging feel to it. The last word should go to Epstein, who I felt summed up ABM nicely with the following: “As a young science, [it has made] tremendous progress and [has great] momentum”.


Heppenstall, A., Crooks, A.T. and Malleson, N. (2016) ABM for Simulating Spatial Systems: How are we doing? International Congress on Agent Computing, 29th–30th November, Fairfax, VA.