I am currently working on getting simple multi-agent pathfinding and navigation working. I've gone through a number of papers in these areas and implemented them. What I'm a bit confused about now is what the overall strategy should be for multi-agent systems in general (say, for an RTS game).
From what I've been able to glean so far, it seems there are two main components to this problem:
1. Pathfinding – The first step is of course to be able to find a path to the destination, and to be able to tell when a certain location is inaccessible. There are several approaches to this:
- 1.1 Dijkstra-style algorithms such as A* – The most obvious approach would simply be A*, but this has well-known problems with multi-agent behavior, because units often clump up at choke points. This is a major flaw in StarCraft's pathfinding; I believe the issue was particularly glaring with the Dragoon unit. (A minimal sketch of A* follows this list.)
- 1.2 Improvements on 1.1 – There are several approaches here. Some involve computing paths for a group of units and pausing a unit when it is about to hit another. There is also a more elaborate approach that uses time as a third dimension and runs A* through the resulting 3D space.
- 1.3 Potential fields – Assign a scalar field (the potential) to every point in space and use gradient descent to move agents toward the desired goal(s). Problem: units may get stuck in local minima.
- 1.4 Flow fields – This is the most elegant approach I've seen so far. Basically, the pathfinding problem is modeled as finding a solution to the Eikonal equation in 2D. Then, by using the finite difference method to solve a discretized version of the Eikonal equation, we get a vector field at every point in space that points in the optimal direction for the agent to travel. (A simplified grid sketch also follows this list.)
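For concreteness, here is a minimal, textbook A* over an abstract graph (a sketch only, not from any particular engine; the `neighbors`, `cost`, and `heuristic` callbacks are placeholders you would supply for your own map):

```python
import heapq
import itertools

def a_star(start, goal, neighbors, cost, heuristic):
    """Textbook A*. neighbors(n) yields adjacent nodes, cost(a, b) is the
    edge cost, heuristic(n) is an admissible estimate of n-to-goal cost.
    Returns the path as a list of nodes, or None if goal is unreachable."""
    tie = itertools.count()  # tie-breaker so the heap never compares nodes
    frontier = [(heuristic(start), next(tie), start)]
    came_from = {start: None}
    g = {start: 0.0}
    while frontier:
        _, _, current = heapq.heappop(frontier)
        if current == goal:
            path = []
            while current is not None:  # walk the parent chain back
                path.append(current)
                current = came_from[current]
            return path[::-1]
        for nxt in neighbors(current):
            new_g = g[current] + cost(current, nxt)
            if new_g < g.get(nxt, float("inf")):
                g[nxt] = new_g
                came_from[nxt] = current
                heapq.heappush(frontier,
                               (new_g + heuristic(nxt), next(tie), nxt))
    return None  # frontier exhausted: the destination is inaccessible
```

The `None` return is how you detect that a location is inaccessible.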
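And here is the simplified flow-field idea on a grid. Fair warning: this sketch cheats, using Dijkstra distances as the integration field instead of a proper finite-difference solve of the Eikonal equation (a common discrete approximation); 8-connectivity and the per-cell cost convention are also just assumptions of the sketch. It doubles as the lookup for 1.3's gradient descent, since each agent just samples the vector at its cell every tick:

```python
import heapq
import math

def flow_field(grid, goal):
    """grid[y][x] is the cost to enter a cell (math.inf for walls); goal
    is an (x, y) tuple. Builds an integration field with Dijkstra, then
    points each cell at its cheapest neighbor. Returns a dict mapping
    (x, y) -> (dx, dy) grid step toward the goal."""
    h, w = len(grid), len(grid[0])
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1),
             (1, 1), (1, -1), (-1, 1), (-1, -1)]
    dist = {goal: 0.0}
    frontier = [(0.0, goal)]
    while frontier:
        d, (x, y) = heapq.heappop(frontier)
        if d > dist.get((x, y), math.inf):
            continue  # stale heap entry
        for dx, dy in steps:
            nx, ny = x + dx, y + dy
            if 0 <= nx < w and 0 <= ny < h and grid[ny][nx] < math.inf:
                nd = d + grid[ny][nx] * math.hypot(dx, dy)
                if nd < dist.get((nx, ny), math.inf):
                    dist[(nx, ny)] = nd
                    heapq.heappush(frontier, (nd, (nx, ny)))
    field = {}
    for (x, y), d in dist.items():
        # Steepest descent on the integration field.
        nd, step = min((dist.get((x + dx, y + dy), math.inf), (dx, dy))
                       for dx, dy in steps)
        if nd < d:
            field[(x, y)] = step
    return field
```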
2. Navigation – For the approaches above, a common problem (in addition to those already listed) is that when agents get close to each other, the collisions aren't resolved very well. In my experiments I've found that if you just use a physics engine to resolve collisions, there can be a lot of "pushing", whereas the desirable outcome would be agents knowing to make way for others. I've found the following approaches for this area:
- 2.1 Boids/Flocking – This approach basically mimics large flocks of animals like birds or insects. A few averaging rules are used to ensure that a big group keeps a stable shape, that agents don't collide with one another, that the group forms some kind of alignment, etc. (See the steering sketch after this list.)
- 2.2 Velocity Obstacles – This approach gives each agent a field of view in front of it and has it predict its own motion to see whether there is an obstacle ahead, then adjust its trajectory accordingly. (A sampling-based sketch also follows this list.)
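For 2.1, here is a minimal version of the three Reynolds rules (separation, alignment, cohesion) as a single local steering force. The weights and neighbor radius are made-up tuning values, and the output is meant to be blended with the pathfinding direction rather than used alone:

```python
import math

def boids_steering(agent, neighbors, sep_w=1.5, ali_w=1.0,
                   coh_w=1.0, radius=4.0):
    """Classic Reynolds boids rules as one local steering force.
    `agent` and each neighbor are ((x, y), (vx, vy)) pairs; the weights
    and neighbor radius are made-up tuning values. The returned (fx, fy)
    is meant to be blended with the pathfinding direction."""
    (px, py), (vx, vy) = agent
    sep = [0.0, 0.0]     # steer away from close neighbors
    avg_v = [0.0, 0.0]   # average neighbor velocity (for alignment)
    center = [0.0, 0.0]  # neighbor center of mass (for cohesion)
    n = 0
    for (nx, ny), (nvx, nvy) in neighbors:
        d = math.hypot(nx - px, ny - py)
        if d >= radius or d == 0.0:
            continue
        n += 1
        # Separation falls off with distance squared.
        sep[0] += (px - nx) / (d * d)
        sep[1] += (py - ny) / (d * d)
        avg_v[0] += nvx
        avg_v[1] += nvy
        center[0] += nx
        center[1] += ny
    if n == 0:
        return (0.0, 0.0)
    ali = (avg_v[0] / n - vx, avg_v[1] / n - vy)    # match group heading
    coh = (center[0] / n - px, center[1] / n - py)  # drift toward center
    return (sep_w * sep[0] + ali_w * ali[0] + coh_w * coh[0],
            sep_w * sep[1] + ali_w * ali[1] + coh_w * coh[1])
```

A typical use is something like `desired = flow_vector + boids_steering(...)`, clamped to the unit's maximum speed.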
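And for 2.2, a sampling-based approximation of velocity obstacles. The proper formulation constructs the forbidden velocity cones geometrically (see the VO/RVO papers); this sketch just samples candidate velocities, discards any that would collide with a neighbor within a time horizon (assuming neighbors hold their current velocity and all agents share one radius), and keeps the survivor closest to the preferred velocity. Every parameter here is invented for illustration:

```python
import math

def time_to_collision(p_rel, v_rel, r_sum):
    """Time until two discs with combined radius r_sum touch, given the
    neighbor's position and velocity relative to us; math.inf if never."""
    px, py = p_rel
    vx, vy = v_rel
    a = vx * vx + vy * vy
    b = 2.0 * (px * vx + py * vy)
    c = px * px + py * py - r_sum * r_sum
    if c <= 0.0:
        return 0.0  # already overlapping
    disc = b * b - 4.0 * a * c
    if a == 0.0 or disc < 0.0:
        return math.inf
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t >= 0.0 else math.inf

def choose_velocity(pos, preferred, others, radius=0.5,
                    horizon=2.0, max_speed=3.0, samples=32):
    """others is a list of ((x, y), (vx, vy)) neighbor states. Returns the
    sampled velocity closest to `preferred` that stays collision-free for
    `horizon` seconds, or (0, 0) if every candidate is blocked."""
    candidates = [(0.0, 0.0), preferred]
    for i in range(samples):
        ang = 2.0 * math.pi * i / samples
        for speed in (0.5 * max_speed, max_speed):
            candidates.append((speed * math.cos(ang), speed * math.sin(ang)))

    def safe(v):
        # p_rel = other - self, v_rel = v_other - v_self.
        return all(
            time_to_collision((ox - pos[0], oy - pos[1]),
                              (ovx - v[0], ovy - v[1]),
                              2.0 * radius) > horizon
            for (ox, oy), (ovx, ovy) in others)

    feasible = [v for v in candidates if safe(v)]
    if not feasible:
        return (0.0, 0.0)  # nothing safe; stop and wait
    return min(feasible, key=lambda v: math.hypot(v[0] - preferred[0],
                                                  v[1] - preferred[1]))
```

Here `preferred` would normally be the flow-field direction scaled to the unit's maximum speed.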
My question, then, is: Is the dichotomy I've described an accurate picture of how multi-agent systems are developed in video games? If so, how do you decide when to use one or the other? My hunch is that to find paths to destinations you'd use one of the pathfinding algorithms from 1, and for local collision resolution you'd use one of the navigation algorithms from 2. Is that what people usually do? I'm mostly looking for a high-level strategy/overview here.
I'm currently trying 1.4 (flow fields) + 2.1 (boids) for an RTS/RTT-type game and the results are decent. I'm just wondering if there are better approaches as well.