Help! ABM and AI

Hello everyone!

I’m working on my Master’s thesis, and I would like to improve the accuracy of agent-based models by using machine learning and deep learning during the modeling process.

To achieve this, I need an existing model with a data set for its validation. Can anyone help me? Thank you in advance, peers!

Best Regards,
Francisco Rodes

Francisco, there are nearly 500 models in the CoMSES model library you can take a look at. Perhaps there is one in the library that will serve. Don’t forget to cite any model you use for this work.

Michael Barton

In one of my PhD projects, I tried to integrate ABM with machine learning, so that the ABM can handle large-scale data and be enhanced with higher accuracy. See the published paper below.

Zhang, Haifeng, et al. “Data-driven agent-based modeling, with application to rooftop solar adoption.” Autonomous Agents and Multi-Agent Systems 30.6 (2016): 1023-1049.

Hope this is useful, and let me know if you have any other questions.


Thanks @haifengz! Adding a link to the article you listed:

@f.rodes have you read Bill Rand’s article from 2006?

Could you describe in more detail what you mean by improving the accuracy of an ABM / why you’d like to integrate ABMs and machine learning? Do you want the internal decision making within the model to include a machine learning component? Or use machine learning to discover the parameterizations that lead the model to produce a desired statistical pattern / distribution?


Thank you @haifengz! I have already read your paper and it’s cited in my report as an example. Great work!

However, I wrote this post two months ago when I was starting my Master’s thesis. Now I understand ABM much better, along with the amount of time and data needed to carry out what I originally intended. So what I’m trying to do now is show how to integrate machine learning with ABM in the simplest way I have found: creating agents for the El Farol Bar problem that use neural networks to decide. I would have liked to take on a bigger project, but I have found a new area that I like a lot, so I’ll keep working in that direction. Thank you very much for the suggestion!


Hi @alee!

Yes, now I see that ABMs are not that suitable for prediction, and my intention was to use machine learning in both of the ways you mention here: giving agents the capacity to decide through machine learning, and discovering parameters from data to generate rules. However, I had neither the data nor the time, so, following the paper you mention and another that Rand and Forrest published in 2009, I’m trying to replace the genetic algorithm they implemented with different neural networks that will hopefully arrive at the same solution.
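For anyone curious what that replacement might look like, here is a minimal, self-contained sketch (not Francisco’s actual model; the agent count, memory length, and learning rate are illustrative choices). Each agent predicts next week’s attendance with a single linear neuron trained online by the delta rule, in place of a GA-evolved strategy table:

```python
import random

CAPACITY = 60   # bar is enjoyable if at most 60 of 100 agents attend
MEMORY = 5      # weeks of attendance history each agent sees
N_AGENTS = 100

class NeuralAgent:
    """El Farol agent that predicts attendance with a single linear
    neuron trained online by the delta rule."""
    def __init__(self):
        self.w = [random.uniform(-0.1, 0.1) for _ in range(MEMORY)]
        self.bias = random.uniform(-0.1, 0.1)
        self.lr = 0.00001

    def predict(self, history):
        return self.bias + sum(w * h for w, h in zip(self.w, history))

    def decide(self, history):
        # Go to the bar if the predicted crowd is below capacity.
        return self.predict(history) <= CAPACITY

    def learn(self, history, actual):
        # Delta-rule update toward the observed attendance.
        error = actual - self.predict(history)
        self.bias += self.lr * error
        self.w = [w + self.lr * error * h for w, h in zip(self.w, history)]

random.seed(42)
agents = [NeuralAgent() for _ in range(N_AGENTS)]
history = [random.randint(0, N_AGENTS) for _ in range(MEMORY)]

for week in range(200):
    attendance = sum(a.decide(history) for a in agents)
    for a in agents:
        a.learn(history, attendance)
    history = history[1:] + [attendance]

print(history[-1])  # attendance in the final simulated week
```

Heterogeneous initial weights give agents different predictions, which is what keeps attendance from collapsing to all-or-nothing every week.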

Thank you very much for the advice! I appreciate it.



Thanks for adding the link to the paper and bringing up these interesting questions! I reread Rand (2006) and have the following thoughts. Traditional ABM work often deals with simple problems, but in practice we need a data-driven solution (i.e., learning from observational or experimental data and making decisions). Given the advances in ML and AI (automated, more scalable, and more accurate), why can’t we replace simple agent rules with ML models? To answer your last two questions: I meant having an ML model as a component (a decision engine) in agent-based simulations, rather than the second approach described in Rand’s paper.
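A toy illustration of the “ML model as decision engine” idea (the data-generating rule and features here are invented for the example): a small logistic-regression engine is fitted to observed (features, action) pairs and then plugged into an agent in place of a hand-written rule.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class LearnedDecisionEngine:
    """Logistic regression fitted to observed behaviour, used as an
    agent's decision engine instead of a hand-coded rule."""
    def __init__(self, n_features, lr=0.1, epochs=500):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr, self.epochs = lr, epochs

    def fit(self, X, y):
        for _ in range(self.epochs):
            for xi, yi in zip(X, y):
                p = sigmoid(self.b + sum(w * x for w, x in zip(self.w, xi)))
                g = p - yi  # gradient of the log-loss for this sample
                self.b -= self.lr * g
                self.w = [w - self.lr * g * x for w, x in zip(self.w, xi)]

    def decide(self, features):
        return sigmoid(self.b + sum(w * x for w, x in zip(self.w, features))) > 0.5

class Agent:
    """Agent whose behaviour is delegated to a pluggable engine."""
    def __init__(self, engine):
        self.engine = engine

    def step(self, observation):
        return self.engine.decide(observation)

# Synthetic "observed behaviour": adopt (1) whenever incentive > cost.
random.seed(0)
X = [[random.random(), random.random()] for _ in range(200)]
y = [1 if inc > cost else 0 for inc, cost in X]

engine = LearnedDecisionEngine(n_features=2)
engine.fit(X, y)
agent = Agent(engine)
print(agent.step([0.9, 0.2]), agent.step([0.1, 0.8]))
```

In a real data-driven ABM the engine would of course be fitted to empirical data (as in the rooftop solar paper) rather than to a synthetic rule.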


Glad you figured out a solution, Francisco. Look forward to seeing your model!


I will upload it as soon as it is ready! Thanks for the help!

Hi @haifengz! :wave: I’m probably not the best person to ask since I’m not a model developer myself, but from my perspective, having learned about ABMs mostly through osmosis, I think it depends on your research question.

If you are interested in discovering or analyzing the conditions under which simple rules and dynamics can generate some emergent phenomenon, then a black-box ML component wouldn’t be as useful, since it doesn’t give you insight into your theory. If you are more interested in scenario exploration and the effects agents might have on some external state that they in turn depend on, and you don’t care as much about the theoretical underpinnings of their internal decisions (assuming you have some empirical heuristic to guide the evolution of your ML component), then an internal ML component could be an appropriate choice.

I’d love to see more componentizable ABM modules that let one swap out the decision-making component so we can explore this space more easily.
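One hypothetical shape such swappable modules could take (the interface and all names here are invented for illustration): any object exposing a decide() method can be dropped into an agent, whether it’s a transparent theory-driven rule or a learned policy.

```python
from typing import Protocol

class DecisionEngine(Protocol):
    """Anything with a decide() method can be plugged into an agent."""
    def decide(self, observation: dict) -> bool: ...

class ThresholdRule:
    """A transparent, theory-driven rule."""
    def decide(self, observation: dict) -> bool:
        return observation["payoff"] > observation["cost"]

class LookupPolicy:
    """Stand-in for a learned (e.g., ML) component: any mapping from
    observed states to actions satisfies the same interface."""
    def __init__(self, table):
        self.table = table

    def decide(self, observation: dict) -> bool:
        return self.table.get(observation["state"], False)

class Agent:
    def __init__(self, engine: DecisionEngine):
        self.engine = engine

    def step(self, observation: dict) -> bool:
        return self.engine.decide(observation)

# The same agent code runs with either engine swapped in.
a = Agent(ThresholdRule())
b = Agent(LookupPolicy({"crowded": False, "quiet": True}))
print(a.step({"payoff": 3, "cost": 1}))  # rule-based decision
print(b.step({"state": "quiet"}))        # policy-table decision
```

The point of the pattern is that comparing a theory-driven rule against a black-box component then becomes a one-line change in the experiment setup.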

Although the recent resurgence of ML offers unique opportunities to “skip theory” for human decision making, it’s also worth noting potential drawbacks, e.g., algorithmic biases and biases in the data collected for training, that Cathy O’Neil and others have documented. Furthermore, it’s unclear whether it’s sound to base models of human decision making on past collected datasets, since culture and technology seem to be evolving so quickly these days.

In any case, thanks for sharing your paper on using ML techniques to inform the adoption of rooftop solar - it was very interesting! I’ll need to read it more closely when I have time. Will you be making your rooftop solar ABM available on CoMSES or another repository? I haven’t seen many examples of embedding ML in ABMs and I look forward to what you and the rest of the community can do :grinning:

You are probably aware of these papers on human decision making in ABMs already but including just in case:

Another cautionary tale on bias in (text) ML models:

Good catch on the drawbacks of ML, alee. Well, I would say ML’s ability to “skip theory” is not a bad thing in practice, especially when there is no available theory, or our theory is incomplete. The complex, dynamic world we live in poses tremendous challenges to any model (not just ML). Ideally, we would like a computational model that can learn and adapt to its environment by itself. This, in fact, motivates ML researchers to develop more intelligent and robust models, i.e., online, active, transfer, and reinforcement learning. The “black box” ML model is certainly not suitable for decision making, but lately there is a strong trend in the ML community toward interpretability and causality, i.e., causal learning. Finally, bias is a common issue for many modeling paradigms. In ML, it often shows up as overfitting, but there is a widely used mitigation (although not perfect, and still limited by the data): cross-validation. What is your opinion on the pros and cons of ABM, alee? I would like to hear more about the downsides of ABM. Our discussion in this thread is getting more and more interesting.
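For readers unfamiliar with it, k-fold cross-validation can be sketched in a few lines (the toy threshold classifier and the synthetic data below are only for illustration): the data are split into k folds, and each fold in turn is held out for evaluation while the model is fitted on the rest.

```python
import random

def k_fold_indices(n, k, seed=0):
    """Split indices 0..n-1 into k roughly equal, shuffled folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(X, y, fit, predict, k=5):
    """Mean held-out accuracy over k folds."""
    scores = []
    for fold in k_fold_indices(len(X), k):
        held_out = set(fold)
        X_tr = [x for i, x in enumerate(X) if i not in held_out]
        y_tr = [t for i, t in enumerate(y) if i not in held_out]
        model = fit(X_tr, y_tr)
        correct = sum(predict(model, X[i]) == y[i] for i in fold)
        scores.append(correct / len(fold))
    return sum(scores) / k

# Toy classifier: threshold halfway between the two class means.
def fit(X, y):
    m1 = sum(x for x, t in zip(X, y) if t) / max(1, sum(y))
    m0 = sum(x for x, t in zip(X, y) if not t) / max(1, len(y) - sum(y))
    return (m0 + m1) / 2

def predict(threshold, x):
    return x > threshold

random.seed(1)
X = [random.gauss(0, 1) for _ in range(100)] + [random.gauss(2, 1) for _ in range(100)]
y = [0] * 100 + [1] * 100
print(round(cross_validate(X, y, fit, predict), 2))
```

Because every accuracy estimate comes from data the model never saw during fitting, cross-validation exposes overfitting that in-sample accuracy would hide.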

The Java source code (developed in Repast) for my data-driven ABM solar paper is located at:

More applications are needed to see how robust this result is - but for now it seems to raise intriguing possibilities for applying AI / ML to complex systems analysis:

Thanks for posting your model! Well, we always like to refer to the famous George Box quote:

All models are wrong, but some are useful.

I think ABMs are one of the best ways to study emergent complex phenomena, but the Zen quote is also applicable: “You’re perfect the way you are, and you could use some improvement!”

There’s room for improvement in modeling platforms (I’m excited by the Python Mesa ABM framework since I enjoy writing in Python): supporting true parallelism / concurrency, as Rob Axtell has mentioned; facilitating reuse so we aren’t constantly reinventing the wheel but can instead refine interoperable libraries of model components and pick or swap out the component relevant to your research question; and improving how we document and share models (e.g., an equivalent to Jupyter notebooks or RMarkdown), test their assumptions, and validate and interpret their results. Hopefully we’re helping with some of this here :grin:. So there’s the usual slew of software-engineering problems, then the methodological / theoretical problems, and then the self-reflective mess of how to properly embed human decision making / behavior in agents (if that’s an important component of your research question), which is how this whole thread got started. :cyclone:

As for bias in ML, I was referring to a more subtle / insidious bias than simple overfitting - this Medium post and Cathy O’Neil’s book have some additional detail (it’s more about how we use the results from an algorithm, along the lines of The Tyranny of Metrics):