Multi Agent Systems - Experiences in Mixed-paradigm Modeling with Envision, and Some Future Directions

Agent-based models are increasingly utilized as components of larger simulations and in increasingly complicated domains. This presentation discusses one approach to representing agents in a multi-paradigm modeling context, in the application domain of alternative futures analyses. We examine representational requirements for agents in this context, based on lessons learned from using agent representations in a wide variety of application domains.

We describe Envision, a modeling platform and framework used for the analysis of coupled human and natural systems, focusing on Envision’s Actor representation. Our experience with Envision over the last two decades has suggested a set of requirements for utilizing Actors in real-world simulations of coupled human and natural systems, and it provides some perspectives on emerging trends, needs, and opportunities for ABMs. In particular, the implications of ubiquitous data, machine learning, an increasingly connected world, and increasingly autonomous agents are discussed.

For more information see also https://envision.bee.oregonstate.edu/


Great presentation, thanks for contributing! We’ve been working on a paper taking stock of how to build modular, reusable, and interoperable software components for use in integrated environmental modeling while being cognizant of the dangers of “integronsters”, and this was very relevant. We also reference the Actor model of computation as a promising direction, and we’ve been working on a prototype that integrates computational models implemented in different languages, combining DSSAT (Fortran) with LandLab (Python) and Mesa (Python). It uses an event-driven approach to communicate between models and represent the state of the simulation, with periodic snapshots for performance reasons and replays (i.e., event sourcing).
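To illustrate what we mean by event sourcing with periodic snapshots, here is a rough Python sketch; all of the names (Event, EventLog, the snapshot interval, and so on) are placeholders for illustration and not the actual DSSAT/LandLab/Mesa prototype code:

```python
# Minimal sketch of event sourcing with periodic snapshots for coupling models.
# All names here are hypothetical illustrations, not the real integration code.
import copy
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class Event:
    step: int
    source: str            # which component model emitted the event
    kind: str              # e.g. "soil_moisture_updated"
    payload: dict

@dataclass
class EventLog:
    snapshot_every: int = 100
    events: list = field(default_factory=list)
    snapshots: dict = field(default_factory=dict)   # step -> deep copy of state

    def append(self, event: Event, state: Any) -> None:
        """Record an event; take a full-state snapshot every `snapshot_every` steps."""
        self.events.append(event)
        if event.step % self.snapshot_every == 0:
            self.snapshots[event.step] = copy.deepcopy(state)

    def replay(self, apply: Callable[[Any, Event], Any], upto_step: int) -> Any:
        """Rebuild state at `upto_step` from the nearest prior snapshot plus events."""
        start = max((s for s in self.snapshots if s <= upto_step), default=None)
        state = copy.deepcopy(self.snapshots[start]) if start is not None else {}
        for ev in self.events:
            if (start is None or ev.step > start) and ev.step <= upto_step:
                state = apply(state, ev)
        return state
```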

The gist is that we have legacy models representing decades of useful work that shouldn’t be thrown away, yet it’s often difficult to maintain them or add new features for a variety of reasons (e.g., the Lava Flow antipattern that’s unfortunately emblematic of academic software development). This problem won’t be fixed overnight, but we could make these legacy models more accessible by building data adapters to them using a Docker sidecar approach that lets you handle I/O to the legacy model without writing legacy code (from the modeler’s perspective, at least; the developer of the adapter would still need to interface with the model).
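As a rough illustration of the data-adapter idea (not the actual implementation), a sidecar could translate structured requests into a legacy model’s file-based I/O along these lines; the executable name, file formats, and paths below are all placeholders:

```python
# Hypothetical sketch of a sidecar-style data adapter: translate a JSON-friendly
# dict into the legacy model's file-based inputs, run the model executable, and
# parse its outputs back into a dict. "legacy_model", "input.dat", and
# "output.dat" are placeholders, not a real model's interface.
import subprocess
import tempfile
from pathlib import Path

def run_legacy_model(params: dict) -> dict:
    with tempfile.TemporaryDirectory() as tmp:
        work = Path(tmp)
        # 1. Write the inputs in whatever legacy format the model expects.
        (work / "input.dat").write_text(
            "\n".join(f"{k} {v}" for k, v in params.items())
        )
        # 2. Invoke the legacy executable (placeholder name) in its working dir.
        subprocess.run(["legacy_model", "input.dat"], cwd=work, check=True)
        # 3. Parse the model's output file into a structure the caller can use.
        results = {}
        for line in (work / "output.dat").read_text().splitlines():
            key, value = line.split(maxsplit=1)
            results[key] = float(value)
        return results

# A thin HTTP layer (e.g., Flask or FastAPI) wrapping run_legacy_model() would
# then let other components call the legacy model without touching its code.
```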

Do you have any thoughts on this plug & play approach with models implemented in different languages? One concern I have about a million-plus-line C++ application is the complexity of the overall framework: the barriers to entry and the expertise needed to contribute to and sustain it :grin: C++ is a complicated language! Overall, though, I’m very glad to hear about this project and think the approach being taken here is essential to a better understanding of the complex systems we’re (likely, in future retrospect) haphazardly managing.

As a sidenote, I was unable to access the SVN repository at https://envision.bioe.orst.edu/svn/Envision7 - it would be great to svn2git the repository and throw it on GitHub, even just as a mirror of the SVN repository, for accessibility. You can also get a DOI for your Envision releases via Zenodo, which can then be cited in publications - feel free to PM me if you’d like to discuss this further. This is part of my shill job as a member of the Force11 Software Citation Working Group :sweat_smile:

Thanks for the presentation. Envision is an impressive framework connecting the natural and social sciences. I have one query about the future developments you envision. You focused on increasingly intelligent agents. I wonder whether that is actually what we want. A lot of decision making is influenced by beliefs, alternative facts, religion, biases, and other cultural phenomena. If the goal of a modeling exercise is to represent the actual behavior of decision makers, we need to find ways to capture those elements in more sophisticated ways than are used nowadays. Given a model with realistic agents, we could then use optimization techniques to identify the best interventions (while acknowledging the many uncertainties in representing human decision making). I will also try to get a response from the Cormas team (@nicolas_becu @pierrebommel), who purposely try to keep their models simple, but interactive.

Alee - thanks for your comments. I agree that 1) all those “old” models (or at least some of them) should have a pathway for being utilized in a modern framework, 2) “integronsters” (good term!) should be avoided where possible, and 3) multilanguage support is helpful (and relevant to (1)). We made a very conscious choice to write Envision in C++, in significant part because of the need to allow plug-ins written in various languages to be incorporated into an Envision application, and C++ is about as close to a universal “glue” language as there is. We have in fact written plugins for Envision in FORTRAN, Python, and Java, usually with a little C++ code in the middle to take care of any language-specific needs. This, coupled with performance and robust object orientation, makes C++ a good language, in our opinion, for the “under the hood” part of an application, where efficiency, representational power, and flexibility are arguably more important than ease of use.

We think of this as a layered architecture: 1) a high-performance “kernel” providing the core functionality and plumbing (C++ is great for this), 2) possibly a GUI layer (written in whatever is easiest), and 3) the plug-ins, which come in two flavors - what we call “standard” plug-ins that can be used with no coding (just a configuration file), and “custom” plug-ins, which we generally write in C++. As we (and others) develop more and more “standard” plugins that can be configured in pretty accessible ways (e.g., XML files), the need for custom plugins lessens - that’s really where we see a lot of the Envision development work going now: creating and improving the standard plugins for building a wider and wider variety of models. But you’ve got to have a robust framework to start with, which is where Envision comes in.
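To make the plug-in layering concrete, here is a purely hypothetical sketch of what the language-specific side of a custom plugin might look like when a thin C++ glue layer calls into it each time step; the function names and the dict-based context are illustrative only, not Envision’s actual plugin API:

```python
# Hypothetical Python plugin as seen from a thin C++ glue layer.
# The glue layer would call init() once at startup and run() every time step,
# marshalling data between the framework's structures and plain Python objects.
# None of these names come from Envision itself.

def init(config_path: str) -> dict:
    """Read the plugin's configuration file and return its initial state."""
    state = {"config_path": config_path, "step": 0}
    return state

def run(state: dict, context: dict) -> dict:
    """Called each time step with the current landscape/actor context; returns
    outputs for the glue layer to write back into the framework."""
    state["step"] += 1
    # A real plugin would compute something meaningful from the context here,
    # e.g. a landscape metric or an actor decision input.
    outputs = {"example_metric": sum(context.get("values", []))}
    return outputs
```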

One comment on the “million lines of code” perspective. Envision consists of two parts: an “Engine”, which has no GUI and is OS-independent but provides the core under-the-hood functionality and exposes a bunch of capabilities to plugins, and the Windows GUI front end. As is typical, the GUI is responsible for the large majority of the code in Envision; the Engine is considerably smaller.

Regarding accessing our SVN repository for the source code - we had just made some changes that hadn’t yet been pushed to the Envision website, but it has now been updated. You will find a link to our SVN site under the “Overview” menu. I agree that Git is probably a better solution - we are planning to migrate at some point reasonably soon, but that hasn’t quite made it to the top of the priority list yet.


Marco - Thank you for your comments. Your point about whether we want more intelligent agents is a good one. My own perspective is that it depends on what we want these agents/models to do for us. For a lot of our work, we have pretty applied needs: incorporating decision-makers into models of real coupled human and natural systems (CHANS) to assist in decision-making. In these types of applications, we want to capture the decision-making behavior of real actors as much as possible, but we also want to provide our model actors with as much real-world knowledge as possible to support better-informed decision-making, and better yet, give these actors the capability to learn about their model system and adaptively improve their decision-making as a result. There is a bit of a tension between reflecting “real world” actors in the model versus having more “optimized” (a dangerous word I try to avoid when talking about complex models) decisions, as model Actors potentially exceed human actors in their ability to make informed, quasi-optimal decisions. While our work to date has generally focused more on the former, we see increasing opportunities for the latter, to the point where we imagine that at least some of the decisions we explore might very well be negotiated by intelligent agents in the not-too-distant future.

We also see learning/adaptation as a key fundamental process in CHANS, one of the reasons we constructed Envision to provide its actors with the ability to observe dynamic landscape production signals they can pay attention to if they are so inclined. This opens up possibilities for actors to “sandbox” the model and explore the effects of different decisions prior to making an actual decision in the model, potentially learning in the process. This starts to go beyond what most real-world actors do, but from a dynamical systems “optimization” (ugghh) point of view, at some level we want our actors to be “smarter” than real-world decision-makers. It just depends on the questions we are exploring.
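As a simple illustration of the “sandbox” idea (not Envision code), an actor might evaluate candidate decisions on a copy of the relevant state before committing one in the actual model; the simulate and score callables below stand in for whatever the model actually provides:

```python
# Illustrative sketch only: try candidate decisions on a sandbox copy of the
# actor's state, observe the outcomes, and commit the best-scoring decision.
import copy

def sandbox_decision(actor_state, candidate_decisions, simulate, score):
    """simulate(state, decision) -> new state; score(state) -> how well the
    actor's objectives are met. Both are placeholders for model-provided logic."""
    best_decision, best_score = None, float("-inf")
    for decision in candidate_decisions:
        trial_state = copy.deepcopy(actor_state)       # sandbox copy; real state untouched
        trial_state = simulate(trial_state, decision)  # explore the consequences
        s = score(trial_state)
        if s > best_score:
            best_decision, best_score = decision, s
    return best_decision
```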

RE: Simple models. I love simple models. They are great for providing insights. That said, in pretty much every project we’ve done, reality has intervened, and our models get more complex than we would like, because the real-world systems we work with are complex (and our stakeholders are generally keenly aware of the complexities in their systems that they expect to see addressed in the model). Again, it depends on the question being asked. For pursuing fundamental questions, simple models are often more appropriate, particularly if they are highly interactive. For modeling real-world systems, the complexity of reality can be hard to fully avoid. Envision can certainly be used to interactively explore simple systems, but that is not its strength - it is most suited to looking at more complex real-world systems where a variety of representational approaches, as well as computational efficiency, become important factors.

Hi John, I agree that it would be a very fruitful exercise to compare the outcomes of “realistic” agents and more “intelligent” agents, to explore the gains one can achieve with better coordination, information, and planning.

The Cormas team (@pierrebommel @nicolas_becu) also works with stakeholders on real-world systems and has a different philosophy. I guess the two groups (Envision and Cormas) have different perspectives on how they want to use models with stakeholders. There is room for both approaches, and perhaps it has to do with the type of political processes the groups engage with when they use models in decision making. I know the Cormas group better than the Envision group, and I gather that the Cormas group’s starting point is a collaborative process: they want stakeholders directly involved in model development, aim to understand their mental models, and use the modeling exercise as a tool for better understanding the problem rather than for making scenario projections of the future.
