Contribution to Geocomp 2017 Keynote

One of the Geocomputation keynotes is going to be crowdsourced. This is the first time that this has happened at Geocomputation. The brave souls pulling this together are Dr Adam Dennett and Dr Dianna Smith. I’ve added my thoughts/musings, copied below (please note these were written off the top of my head). Do get involved and give your opinions on the subject – you will be credited on the keynote – go here.

Thoughts / questions / musings / predictions / observations and things that are getting you all excited about the future of GeoComputation as a sub-discipline

“As I’m from Yorkshire, I can’t just post ‘excited’ things about Geocomputation – I have to start with some whinging to get comfortable. My area of Geocomputation, individual-based modelling (IBM), has several very important methodological issues to overcome: understanding patterns in spatio-temporal data, simulating (human) behaviour and, most importantly, robustly calibrating and validating simulation models. With the heralding of ‘big data’, we have a real opportunity to use new forms of micro data both to improve the realism of our models and to bring rigour to calibration and validation. However, this hasn’t happened. Why? Personally, I think researchers have been distracted from the big issues in IBM (and Geocomp more broadly) by both these new forms of data and the easy DIY IBM frameworks that are abundantly available. I feel that IDEs (e.g. NetLogo, perhaps not so much Repast) that allow ABMs to be rapidly thrown together are having a negative effect. Journals are full of models that have little engagement with theory and are poorly calibrated and validated.

Why is this important? Well, as academics we want our work to have a positive societal impact and be taken up by policymakers. There are innumerable challenges that now face us, e.g. dealing with an ageing population and creating smart and sustainable cities. Technologies such as IBM can provide valuable insight that can help policymakers in solving some of these issues. But without robust calibration and validation of these approaches (comparable to that found in climate models), these models remain academic playthings. IBM, and especially ABM, is a bit of an anomaly, as it has developed rapidly in several silos over the past 20 years – there is no centrally held ‘best’ practice, and the discipline certainly needs input from other areas such as maths (error quantification), physics (handling non-linearity and complexity), computing (large simulations), sociology, human geography and psychology (behavioural frameworks and theory) to progress. To move ABM forward, the community needs to work together – but where to start?
Geocomputation is a rapidly moving subject and I feel the definition is very dynamic, changing with the current fad – e.g. most people would associate ABM with Geocomp rather than other approaches such as Bayesian methods. However, if we strip it back to basics, it is, as Andy Evans describes, “the art of solving complex problems with computers” – increasing computer power, new technology (sharing and dissemination platforms) and more data give us the opportunity to solve (and contribute to) these problems, and this is possibly the most exciting part of Geocomputation. But as a community will we ever get our act together and realise this potential?”

ABM Congress, Washington

I attended the International Congress on Agent Computing at George Mason University (US) last month. It was organised to mark the 20th anniversary of the publication of Robert Axtell and Joshua Epstein’s landmark work, Growing Artificial Societies, and as such was both a celebration and a reflection on how far the discipline has progressed over the last 20 years.

While it is clear that in some areas there have been great gains, such as the size and complexity of ABMs (not to mention the sheer number of applications – Robert Axtell in his presentation gave the following figures based on a keyword search of publications: 1K papers per year on IBM, 10K per year on MAS and 5K per year on ABM), I see these gains as mainly attributable to advances in software and the availability of data, not to us tackling the big methodological problems. I would strongly agree with Axtell that ABMs are still ‘laboratory animals’ and not yet ready for uptake in policy. This view surprisingly contrasted with Epstein, who in his opening remarks described ABM as a ‘mature scientific instrument’, perhaps nodding towards the large numbers of (often bad) ABMs that are continually appearing. However, Epstein did agree with Axtell in the discussion of several challenges/definitive pieces of work that ABM needs to take on, such as creating cognitively plausible agents (accompanied by a big plug for Epstein’s recent book, Agent Zero, on this very topic), not getting side-tracked by big data – “Data should be as big as necessary, but no bigger” (a nice play on Einstein’s ‘models should be as simple as possible, but no simpler’) – and calibrating large-scale ABMs.

It is this last point, that of calibration and validation, that can be blamed for my grumpy mood throughout most of the Congress presentations. There was some fantastic work, creating very complex agents and environments, but these models were calibrated and validated using simple statistics such as R²! Complex models produce (often) complex results, which in turn require complex analysis tools. By the time my presentation slot came around on the last afternoon, I was in the mood for a bit of a rant… which is exactly what I did! But I’d like to think I did it in a professional way… I presented a joint talk with Andrew Crooks and Nick Malleson entitled “ABM for Simulating Spatial Systems: How are we doing?”, which reflected on how well (or not) ABM of geographical systems has advanced over the last 20 years.
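To make the point concrete, here is a toy sketch of my own (Python, not taken from any of the Congress models): a simulation that reproduces a broad spatial trend but misses every local hotspot can still score an impressive global R², which is exactly why a single summary statistic tells us so little about how a spatial model is failing. The grid size, trend and hotspot locations below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 'observed' counts on a 20 x 20 grid of zones: a broad
# east-west trend plus a handful of sharp local hotspots.
x = np.linspace(0, 1, 20)
trend = 100 * np.tile(x, (20, 1))              # smooth large-scale gradient
observed = trend.copy()
for (i, j) in [(3, 4), (10, 15), (16, 7)]:     # hotspot zones
    observed[i, j] += 120

# A hypothetical 'model' output that captures the trend but misses every hotspot.
simulated = trend + rng.normal(0, 2, size=trend.shape)

def r_squared(obs, sim):
    """Coefficient of determination between observed and simulated surfaces."""
    ss_res = np.sum((obs - sim) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

print("Global R^2:", round(r_squared(observed, simulated), 3))  # high (~0.9)

# Per-zone residuals tell a very different story: the largest errors sit
# exactly on the hotspots the 'model' cannot reproduce.
residuals = np.abs(observed - simulated)
worst = tuple(int(k) for k in np.unravel_index(residuals.argmax(), residuals.shape))
print("Largest zone error:", round(float(residuals.max()), 1), "at zone", worst)
```

Nothing sophisticated, but it is the kind of check – looking at where a model is wrong, not just how wrong it is on average – that I would like to see alongside (or instead of) a single R².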

We argued that while as geographers we are very good at handling space (thanks to GIS), we are not very good at representing relationships and interactions (human to human and human to environment). We also need to look closely at how to scale up individual agents: for example, how can we take an agent created at the neighbourhood level, with its own rules and explicit use of space, and scale this up to the city level while preserving all the characteristics and behaviours of that agent? Work needs to be done now to shape how we use Big Data, to ensure that it becomes an asset to ABM rather than a burden. And then I moved on to calibration and validation! It wasn’t all gloom – the presentation featured lots of eye candy thanks to Nick and Andrew.

While the Congress brought together an interesting line-up of interdisciplinary keynote speakers – Brian Arthur, Mike Batty, Stuart Kauffman and David Krakauer – all were men. Of the 19 posters and 59 presentations, only a handful were given by women. I find this lack of diversity disappointing (I refer here to gender, but this could equally be applied to other aspects of diversity). While women are in the minority in this discipline, we do have a presence, and an event reflecting on the past and celebrating a promising future should have fully reflected this.

However, I don’t wish to end on a negative note: the Congress was fantastic in the breadth of work that it showcased, and because it was so small it had a genuinely friendly and engaging feel to it. The last word should go to Epstein, who I felt summed up ABM nicely with the following: “As a young science, [it has made] tremendous progress and [has great] momentum”.

Reference: 

Heppenstall, A., Crooks, A.T. and Malleson, N. (2016) ABM for Simulating Spatial Systems: How are we doing? International Congress on Agent Computing, 29th–30th November, Fairfax, VA.

Call for Papers – Symposium on Human Dynamics in Smart and Connected Communities: Agents – the ‘atomic unit’ of social systems?

We welcome paper submissions for our session(s) at the Association of American Geographers Annual Meeting on 5-9 April, 2017, in Boston.

Session Description:

By defining a social system as a collection of agents, individuals and their behaviors/decisions become the driving force of these systems. Complex global phenomena such as collective behaviors, extensive spatial patterns, and hierarchies are manifested through agent interaction in such a way that the actions of the parts do not simply sum to the activity of the whole. This allows unique perspectives into the inner workings of social systems, making agent-based modelling (ABM) a powerful and appealing tool for understanding the drivers of these systems and how they may change in the future.
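As a purely illustrative aside (my own addition, not part of the session description): a minimal Schelling-style sketch in Python shows the kind of emergence the paragraph above describes, with a grid that ends up far more segregated than any single agent’s mild local preference would suggest. All parameters here are invented for the demonstration.

```python
# Minimal Schelling-style sketch (illustrative only): agents follow one mild
# local rule, yet the aggregate pattern is not a simple sum of the parts.
import random

random.seed(0)
SIZE = 30          # 30 x 30 toroidal grid of cells
EMPTY_FRAC = 0.1   # fraction of cells left empty
THRESHOLD = 0.4    # an agent is content if >= 40% of its neighbours match it

def new_cell():
    return None if random.random() < EMPTY_FRAC else random.choice(["A", "B"])

grid = [[new_cell() for _ in range(SIZE)] for _ in range(SIZE)]

def neighbours(i, j):
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if (di, dj) != (0, 0):
                yield grid[(i + di) % SIZE][(j + dj) % SIZE]

def unhappy(i, j):
    me = grid[i][j]
    if me is None:
        return False
    occ = [n for n in neighbours(i, j) if n is not None]
    return bool(occ) and sum(n == me for n in occ) / len(occ) < THRESHOLD

def mean_similarity():
    scores = []
    for i in range(SIZE):
        for j in range(SIZE):
            me = grid[i][j]
            occ = [n for n in neighbours(i, j) if n is not None]
            if me is not None and occ:
                scores.append(sum(n == me for n in occ) / len(occ))
    return sum(scores) / len(scores)

def step():
    """Move every currently unhappy agent to a randomly chosen empty cell."""
    movers = [(i, j) for i in range(SIZE) for j in range(SIZE) if unhappy(i, j)]
    empties = [(i, j) for i in range(SIZE) for j in range(SIZE) if grid[i][j] is None]
    random.shuffle(movers)
    for (i, j) in movers:
        if not empties:
            break
        ei, ej = empties.pop(random.randrange(len(empties)))
        grid[ei][ej], grid[i][j] = grid[i][j], None
        empties.append((i, j))
    return len(movers)

print("average neighbour similarity before:", round(mean_similarity(), 2))  # ~0.5
for _ in range(50):
    if step() == 0:
        break
print("average neighbour similarity after: ", round(mean_similarity(), 2))  # noticeably higher
```

The interesting output is not any one agent’s decision but the aggregate pattern – precisely the individual-driven, emergent dynamic that the session is concerned with.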

What is noticeable from recent applications of ABM is the increase in complexity (richness and detail) of the agents, a factor made possible through new data sources and increased computational power. While there has always been ‘resistance’ to the notion that social scientists should search for some ‘atomic element or unit’ of representation that characterizes the geography of a place, the shift from aggregate to individual marks agents as a clear contender to fulfill the role of ‘atom’ in social simulation modelling. However, there are a number of methodological challenges that need to be addressed if ABM is to fully realize its potential and be recognized as a powerful tool for policy modelling in key societal issues. Most pressing are methods to accurately identify, represent, and evaluate key behaviors and their drivers in ABM.

We invite any papers that contribute towards this wide discussion, ranging from epistemological perspectives on the place of ABM, to extracting behavior from novel and established data sets, to new and intriguing applications, to establishing robustness in calibrating and validating ABMs.

Please e-mail the abstract and key words with your expression of intent to Andrew Crooks (acrooks2@gmu.edu) by 22nd October, 2016 (one week before the AAG session deadline). Please make sure that your abstract conforms to the AAG guidelines in relation to title, word limit and key words, as specified by the AAG:

An abstract should be no more than 250 words that describe the presentation’s purpose, methods, and conclusions.

Timeline summary:

  • 20th October, 2016: Abstract submission deadline. E-mail Andrew Crooks by this date if you are interested in being in this session. Please submit an abstract and key words with your expression of intent.
  • 24th October, 2016: Session finalization and author notification
  • 26th October, 2016: Final abstract submission to AAG, via http://www.aag.org. All participants must register individually via this site. Upon registration you will be given a participant number (PIN). Send the PIN and a copy of your final abstract to Andrew Crooks. Neither the organizers nor the AAG will edit the abstracts.
  • 27th October, 2016: AAG registration deadline. Sessions submitted to AAG for approval.
  • 5-9th April, 2017: AAG Annual Meeting.

Organizers:

  • Andrew Crooks, Department of Computational and Data Sciences, George Mason University.
  • Alison Heppenstall, School of Geography, University of Leeds.
  • Nick Malleson, School of Geography, University of Leeds.
  • Paul Torrens, Department of Computer Science and Engineering, Tandon School of Engineering, New York University.
  • Sarah Wise, Centre for Advanced Spatial Analysis (CASA), University College London.

Space, the final frontier…

In the fastest journal submission-to-publication turnaround I have ever experienced, the following paper has just been published online, and is free to grab a copy of:

“Space, the Final Frontier”: How Good are Agent-Based Models at Simulating Individuals and Space in Cities?

It is co-written with Nick Malleson and Andrew Crooks.

Here is the abstract to whet your appetite:

Abstract

Cities are complex systems, comprising many interacting parts. How we simulate and understand causality in urban systems is continually evolving. Over the last decade the agent-based modeling (ABM) paradigm has provided a new lens for understanding the effects of interactions of individuals and how through such interactions macro structures emerge, both in the social and physical environment of cities. However, such a paradigm has been hindered by limited computational power and a lack of large, fine-scale datasets. Within the last few years we have witnessed a massive increase in computational processing power and storage, combined with the onset of Big Data. Today geographers find themselves in a data rich era. We now have access to a variety of data sources (e.g., social media, mobile phone data, etc.) that tell us how, and when, individuals are using urban spaces. These data raise several questions: can we effectively use them to understand and model cities as complex entities? How well have ABM approaches lent themselves to simulating the dynamics of urban processes? What has been, or will be, the influence of Big Data on increasing our ability to understand and simulate cities? What is the appropriate level of spatial analysis and time frame to model urban phenomena? Within this paper we discuss these questions using several examples of ABM applied to urban geography to begin a dialogue about the utility of ABM for urban modeling. The arguments that the paper raises are applicable across the wider research environment where researchers are considering using this approach.

The question is: what famous line to try and get into the title of a paper next time…?

ABM and urban economics

New paper just published: Olner, D., Evans, A. and Heppenstall, A. (2015) An agent model of urban economics: Digging into emergence. Computers, Environment and Urban Systems. doi: 10.1016/j.compenvurbsys.2014.12.003

Abstract: This paper presents an agent-based ‘monocentric’ model: assuming only a fixed location for firms, outcomes …