Coming in January 2019…! Buy it here
This is the era of Big Data and computational social science. It is an era that requires tools that can do more than visualise data: tools that can also model the complex relationship between data and human action and interaction. Agent-Based Models (ABMs) – computational models that simulate human action and interaction – do just that.
This textbook explains how to design and build ABMs and how to link the models to Geographical Information Systems (GIS). It guides you from the basics through to constructing more complex models that work with data and human behaviour in a spatial context. All of the fundamental concepts are explained and related to practical examples to facilitate learning (the models are developed in NetLogo, with all code examples available on the accompanying website). You will be able to use these models to develop your own applications and link them, where appropriate, to Geographical Information Systems.
All of the key ideas and methods are explained in detail:
- geographical modelling;
- an introduction to ABM;
- the fundamentals of Geographical Information Science;
- why ABM and GIS;
- using QGIS;
- designing and building an ABM;
- calibration and validation;
- modelling human behaviour;
- visualisation and 3D ABM;
- using Big Geosocial Data, GIS and ABM.
An applied primer that provides fundamental knowledge and practical skills, it will equip you to build and run your own models and to begin your own research projects.
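To make the idea of an agent-based model concrete before diving into the book's topics, here is a minimal sketch in Python (purely illustrative and not from the book, whose models are built in NetLogo; the grid size, agent count and "interaction" rule are my own assumptions). Agents wander randomly on a wraparound grid, and we count how often two or more agents share a cell:

```python
# Minimal ABM sketch: random-walking agents on a wraparound grid.
# Illustrative only -- parameters and rules are assumptions, not the book's.
import random
from collections import Counter

GRID = 10       # grid width and height (assumed)
N_AGENTS = 20   # number of agents (assumed)
STEPS = 50      # number of simulated time steps (assumed)

def step(positions, rng):
    """Move every agent one cell in a random direction, wrapping at edges."""
    moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    return [((x + dx) % GRID, (y + dy) % GRID)
            for (x, y), (dx, dy) in
            ((pos, rng.choice(moves)) for pos in positions)]

def count_interactions(positions):
    """Count cells occupied by two or more agents ('interactions')."""
    counts = Counter(positions)
    return sum(1 for c in counts.values() if c > 1)

def run(seed=42):
    """Run the model and return the total interaction count."""
    rng = random.Random(seed)
    positions = [(rng.randrange(GRID), rng.randrange(GRID))
                 for _ in range(N_AGENTS)]
    total = 0
    for _ in range(STEPS):
        positions = step(positions, rng)
        total += count_interactions(positions)
    return total
```

Even this toy shows the core ABM idea: simple individual-level rules, iterated over time, produce aggregate patterns you can measure.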
One of the Geocomputation keynotes is going to be crowdsourced. This is the first time that this has happened at Geocomputation. The brave souls pulling this together are Dr Adam Dennett and Dr Dianna Smith. I’ve added my thoughts/musings – copied below (please note these were written off the top of my head). Do get involved and give your opinions on the subject – you will be credited on the keynote – go here.
Thoughts / questions / musings / predictions / observations and things that are getting you all excited about the future of GeoComputation as a sub-discipline
“As I’m from Yorkshire, I can’t just post ‘excited’ things about Geocomputation – I have to start with some whinging to get comfortable. My area of Geocomputation, individual-based modelling (IBM), has several very important methodological issues to overcome: understanding patterns in spatio-temporal data, simulating (human) behaviour and, most importantly, robustly calibrating and validating simulation models. With the heralding of ‘big data’, we have a real opportunity to use new forms of micro data both to improve the realism of our models and to give rigour to calibration and validation. However, this hasn’t happened. Why? Personally, I think that researchers have been distracted from the big issues in IBM (and Geocomputation more broadly) by both these new forms of data and the easy DIY IBM frameworks that are abundantly available. I feel that IDEs (e.g. NetLogo, perhaps not so much Repast) that allow ABMs to be rapidly thrown together are having a negative effect. Journals are full of models that have little engagement with theory and are poorly calibrated and validated. Why is this important? Well, as academics we want our work to have a positive societal impact and to be taken up by policymakers. There are innumerable challenges that now face us, e.g. dealing with an ageing population, creating smart and sustainable cities, and so on. Technologies such as IBM can provide valuable insight that can help policymakers in solving some of these issues. But without robust calibration and validation of these approaches (comparable to that found in climate models), these models remain academic playthings.
IBM, and especially ABM, is a bit of an anomaly, as it has developed rapidly in several silos over the past 20 years – there is no centrally held ‘best’ practice, and the discipline certainly needs input from other areas such as maths (error quantification), physics (handling non-linearity and complexity), computing (large simulations), and sociology, human geography and psychology (behavioural frameworks and theory) to progress. To move ABM forward, the community needs to work together – but where to start?
Geocomputation is a rapidly moving subject and I feel the definition is very dynamic, changing with the current fad: most people would associate ABM with Geocomputation rather than other approaches, e.g. Bayesian methods. However, if we strip it back to basics, it is, as Andy Evans describes it, “the art of solving complex problems with computers” – increasing computer power, technology (sharing and dissemination platforms) and more data give us the opportunity to solve (and contribute to) these problems, and this is possibly the most exciting part of Geocomputation. But as a community, will we ever get our act together and realise this potential?”
In preparation for GeoComputation 2017, we’re now open for volunteers who would like to put on workshops. We welcome applications and suggestions in any GeoComputational area, new and more established. The main day for workshops will be the 3rd Sept 2017, just before the conference.
However, we would also like to run some little mini-hackathon things during some of the breaks/lunches, so if anyone has any ideas for short and sweet little interactive training bursts, or other activities that engage people in exciting technologies, please get in touch. All ideas welcome. Email Andy Evans on firstname.lastname@example.org, or email@example.com
Conference website here.
Fancy coming to London at the end of August? Looking for an exciting Geocomp session to show off your fancy research? Look no further…
Call for papers: **GeoComputation: the next 20 years**
We would like to invite abstracts for a session about the future of GeoComputation at the Royal Geographical Society Annual International Conference 2016 (RGS 2016) in London, Tuesday 30 August to Friday 2 September 2016.
Session outline: The use of fully programmable computers to construct spatial models and run spatial analyses stretches back to the use of ENIAC to calculate ballistic trajectories during the Second World War. As ENIAC was announced to the public in 1946, 2016 represents the 70th year of the public use of computers in geography. Perhaps more happily, it is also 20 years since the term “GeoComputation” was coined to draw together a disparate set of geographers doing computing in the 70s, 80s, and 90s, at the 1996 “1st International Conference on GeoComputation” in Leeds, UK. In 2017, the community built around this conference will be celebrating its 21st birthday, reflecting on its successes and future directions. As part of this celebration, we invite presentations for this session speculating on the future of computing in geography: potentials, problems, and predictions. What is the future? The Internet of Things? Group cognition modelling? Solar-system scale geomorphological modelling? Speculative discussions encouraged!
Please e-mail the abstract and key words with your expression of intent to Ed Manley (firstname.lastname@example.org) by 12th February 2016 (one week before the RGS conference deadline). An abstract should be no more than 250 words.
– Ed Manley, Centre for Advanced Spatial Analysis (CASA), UCL.
– Alison Heppenstall, School of Geography, University of Leeds
– Andrew Evans, School of Geography, University of Leeds
– Nick Malleson, School of Geography, University of Leeds
New paper just published…
Olner D; Evans A; Heppenstall A (2015) An agent model of urban economics: Digging into emergence. Computers, Environment and Urban Systems. doi: 10.1016/j.compenvurbsys.2014.12.003
This paper presents an agent-based ‘monocentric’ model: assuming only a fixed location for firms, outcomes closely parallel those found in classical urban economic models, but emerge through ‘bottom-up’ interaction in an agent-based model. Agents make buying and movement decisions based on a set of simple costs they face from their current location. These spatial costs are reduced to two types: the costs of moving people and goods across geographical distances, and the costs (and benefits) of ‘being here’ (the effects of being at a particular location, such as land costs, amenities or disamenities). Two approaches to land cost are compared: landlords and a ‘density cost’ proxy. Emergent equilibrium outcomes are found to depend on the interaction of externalities and time. These findings are produced by looking at how agents react to changes in four types of cost, two spatial and two non-spatial: commuting, wage, good cost and good delivery. The models explore equilibrium outcomes, the effect of changing costs and the impact of heterogeneous agents, before focusing on one example to find the source of emergence in the externalities of agent choice. The paper finishes by emphasising the importance of thinking about emergence as a tool, not an end in itself.
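The abstract's two cost types can be sketched in a few lines of Python. This is a hypothetical illustration of the structure described above, not the paper's actual code: the names (`Location`, `location_cost`, `best_location`), the linear cost form and the rate values are all my own assumptions.

```python
# Hypothetical sketch of the two-cost structure from the abstract:
# (1) costs of moving people/goods across distance, and
# (2) 'being here' costs at a location. Names and forms are illustrative.
from dataclasses import dataclass

@dataclass
class Location:
    distance_to_firm: float  # geographical distance to the fixed firm location
    here_cost: float         # 'being here' cost (e.g. land cost minus amenities)

def location_cost(loc, commute_rate=1.0, delivery_rate=0.5):
    """Total cost an agent faces at a location: distance-based transport
    costs plus the cost of 'being here'."""
    transport = (commute_rate + delivery_rate) * loc.distance_to_firm
    return transport + loc.here_cost

def best_location(locations, **rates):
    """A simple movement decision: choose the cheapest available location."""
    return min(locations, key=lambda loc: location_cost(loc, **rates))
```

The point of the sketch is the trade-off the agents face: a location near the firm saves transport costs but may carry a high 'being here' cost, and it is the interaction of many such individual choices that produces the emergent equilibria the paper studies.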