I can't believe Civ 5 was mentioned in the same sentence as good A.I. Civ 5 has no A.I. to speak of. None. Zilch. Zero. Turn off all victory conditions but conquest and watch the lame A.I. You must combine arms in Civ 5 to take a city because of the one-unit-per-hex rule. The A.I. cannot combine arms, on any difficulty level. It can cheat its way to a science victory, sure...but overall, compared to DE, it's an easy win for DE.
I mentioned my Microsoft background. I have also worked for two different dev houses, one a very realistic Navy sim house, and I have learned and developed A.I. on my own.
A.I. wasn't taught when I went to college. Heck, given my age (63), there weren't PCs, C++, or fancy scripting languages. We coded in assembly. Today, in most cases, A.I. still isn't taught to any usable degree in the classroom. And no developer is going to tell one of its employees, "Gee, go take 3 years of paid time and learn A.I."
The truth is most dev houses put little to no resources toward it. In my case I was allotted 14 days in the development cycle...and if you know A.I. you know how laughable that is.
No one would stand for it in a chess program. Heck, they expect the chess program to easily beat them...then learn, scale itself to the level of the player, and teach the player. But somehow, when A.I. isn't delivered in other games, there's a horde of defenders with excuses, and everyone just accepts it as the state of the industry or kids themselves into believing good A.I. can't be achieved.
Of course good A.I. can be achieved. But first you have to know it, and then you have to have the time to actually develop it.
Sadly, most game programmers really don't know it. If you don't study it on your own time, you simply will never learn it, as it certainly wasn't part of your course requirements. So what generally happens is the programmer uses a bunch of "IF/THEN" statements. This ad-hoc approach can work in simple terms, but it gets unwieldy past the most basic stuff. It can 'see and attack,' for example...but anything more complex and it can get very hairy, and can even blow up at a certain point.
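To make that concrete, here's a hypothetical sketch of what that ad-hoc rule pile tends to look like. All the names, flags, and thresholds below are made up for illustration, not taken from any real game:

```python
# Hypothetical sketch of the ad-hoc "IF/THEN" approach.
# Every new behavior means threading another branch through
# the same tangle of conditions.

def update_agent(agent, world):
    enemy = world.get("visible_enemy")
    if enemy is not None:
        if agent["health"] > 50:
            agent["action"] = "attack"
        elif world.get("cover_nearby"):
            agent["action"] = "take_cover"
        else:
            agent["action"] = "flee"
    elif world.get("heard_noise"):
        agent["action"] = "investigate"
    else:
        agent["action"] = "patrol"
    return agent["action"]
```

It handles 'see and attack' fine, but add line of sight, ammo, squadmates, and morale and you're soon nesting conditions five deep, which is exactly where this approach blows up.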
This is used most often in shooters, especially those born on an Xbox or PlayStation. You can't use full-blown languages on either...there's no memory for it. The original PlayStation had just 2 MB of main RAM, most of it consumed by the animations, graphics, and sound, plus 1 MB of video RAM. The original Xbox had only 64 MB. So what did they use the most? A scripting language, usually Lua. It's attractive because it has very little overhead.
In our world, though, it simply sucks. Remember Baldur's Gate, anyone? Remember that you could assign A.I. behavior to your party members so they could act on their own? The default scripts weren't that good. Healers never cast heal on hurt party members...etc. Well, you could write your own as a mod...using Lua. And many did. The problem was the scripting was so slow that even today the game will stutter if you load a modded script in.
That's because in reality Lua started life as an industrial configuration language used on machine assembly lines. You can build a database but you can't access it on the fly...it has to fill up completely before you can pull that data...so in game terms, simply looking at the blackboard (the data structure where the game stores everything and its state) can take seconds.
It's a horrible language that far too many games use.
I'm not sure what DE uses, but I'm positive it's not Lua.
In 2005 things got shaken up. Monolith Productions released F.E.A.R., and it blew people away. It wasn't the graphics or some new level design...it was the A.I. It could think. It took cover. It called for help. It worked with teammates. It flanked. Frankly, it did things no one had ever seen in a shooter. It was a PC game, and it used a full-blown language with libraries (it was later ported to consoles by a third-party dev, but that version lacked the A.I. of the computer version and did poorly).
The guy who did that was Jeff Orkin. That's where I started my own learning of A.I. He does a lot of teaching and writing about A.I., and if anyone is interested in learning it, his work is a great starting point.
So going back to the "IF/THEN" scene we all sorely experience in our games: adding a little bit of structure to a bunch of otherwise disjointed rules maps over, somewhat, to the most basic of A.I. architectures—the finite state machine (FSM). The most basic part of an FSM is a state. That is, an A.I. agent is doing or being something at a given point in time; it is said to be "in" a state. The reason this organizes the agent's behavior better is that everything the agent needs to know about what it is doing is contained in the code for the state it is in. The animations it needs to play to act out a certain state, for example, are listed in the body of that state. The other part of the state machine is the logic for what to do next. This may involve switching to another state or simply continuing to stay in the current one.
Usually state machines employ elaborate trigger mechanisms that involve the game logic and situation. For instance, our "guard" state may have the logic, "if [the player enters the room] and [is holding a gun] and [I have the Sword of Smiting], then attack the player," at which point my state changes from "guard" to "attack." Note the three individual criteria in the statement. We could certainly have a different statement that says, "if [the player enters the room] and [is holding a gun] and [I DO NOT have the Sword of Smiting], then flee." Obviously, the result of this is that I would transition from "guard" to "flee" instead.
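A minimal sketch of that guard example might look like this (the world flags are made-up names for illustration); note how the "what next?" logic lives inside the state's own code, which is the coupling the next paragraphs criticize:

```python
# Minimal FSM sketch of the "guard" example.
# Each state owns its own transition logic.

def guard_transitions(world):
    if world["player_in_room"] and world["player_has_gun"]:
        if world["i_have_sword_of_smiting"]:
            return "attack"
        return "flee"
    return "guard"  # no trigger fired: stay in the current state

class Agent:
    def __init__(self):
        self.state = "guard"

    def update(self, world):
        if self.state == "guard":
            self.state = guard_transitions(world)
        # ...an "attack" state and a "flee" state would each carry
        # their own separate transition code here.
        return self.state
```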
So each state has the code for what to do while in that state and, more notably, when, if, and what to do next. While some of the criteria can access some of the same external checks, in the end each state has its own set of transition logic that is used solely for that state. Unfortunately, this comes with some drawbacks.
First, as the number of states increases, the number of potential transitions increases as well—at an alarming rate. If you assume for the moment that any given state could potentially transition to any of the other states, then with 4 states, each of which can transition to the other 3, you get a total of 12 transitions. Adding a 5th state increases this to 20 transitions; 6 states would merit 30, and so on. When you consider that games could potentially have dozens of states transitioning back and forth, you begin to appreciate the complexity. What really drives the issue home, however, is the workload involved in adding a new state to the mix: in order to make that state accessible, you have to go and touch every single other state that could potentially transition to it.
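The arithmetic behind that growth is simple: with n fully connected states, there are n × (n - 1) transitions. A quick check:

```python
# Transition count for a fully connected FSM: each of the n states
# can transition to the other n - 1 states.

def fsm_transitions(n_states):
    return n_states * (n_states - 1)

for n in (4, 5, 6, 12):
    print(n, fsm_transitions(n))  # 4 -> 12, 5 -> 20, 6 -> 30, 12 -> 132
```

A dozen states already means 132 hand-authored transitions to keep consistent.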
The second issue with FSMs is the predictability. The player soon learns the behavior and begins exploiting it.
At this point, it is useful to point out the difference between an action and a decision. In the FSM above, our agents were in one state at a time—that is, they were "doing something" at any given moment (even if that something was "doing nothing"). Inside each state was decision logic that told them if they should change to something else and, in fact, what they should change to. That logic often has very little to do with the state that it is contained in and more to do with what is going on outside the state or even outside the agent itself.
For example, if I hear a gunshot, it really doesn’t matter what I’m doing at the time—I’m going to flinch, duck for cover, wet myself, or any number of other appropriate responses. Therefore, why would I need to have the decision logic for “React to Gunshot” in each and every other state I could have been in at the time? Introduce the behavior tree.
It separates the states from the decision logic. Both still exist in the AI code, but they are not arranged so that the decision logic is in the actual state code. Instead, the decision logic is removed to a stand-alone architecture called the behavior tree.
The main advantage to this is that all the decision logic is in a single place. We can make it as complicated as we need to without worrying about how to keep it all synchronized between different states. If we add a new behavior, we add the code to call it in one place rather than having to revisit all of the existing states. If we need to edit the transition logic for a particular behavior, we can edit it in one place rather than many.
Another advantage of behavior trees is that there is a far more formal method of building behaviors. Through a collection of tools, templates, and structures, very expressive behaviors can be written—even sequencing behaviors together that are meant to go together. This is one of the reasons that Behavior Trees have become one of the more “go-to” AI architectures in games having been notably used in titles ranging from Halo 2 and 3, to Spore.
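Here's a toy behavior tree just to show the shape of the idea. The selector and sequence node types are the standard building blocks; the tree contents below are invented for illustration:

```python
# Toy behavior tree: decision logic lives in one tree, separate
# from the actions. A selector tries children until one succeeds;
# a sequence runs children until one fails.

def selector(*children):
    def run(world):
        for child in children:
            result = child(world)
            if result != "fail":
                return result
        return "fail"
    return run

def sequence(*children):
    def run(world):
        result = "fail"
        for child in children:
            result = child(world)
            if result == "fail":
                return "fail"
        return result
    return run

def condition(key):
    return lambda world: "success" if world.get(key) else "fail"

def action(name):
    return lambda world: name  # leaf: the "state" that does stuff

# "React to gunshot" lives in exactly one place in the tree.
tree = selector(
    sequence(condition("heard_gunshot"), action("duck_for_cover")),
    sequence(condition("sees_player"), action("attack")),
    action("patrol"),
)

print(tree({"heard_gunshot": True}))  # duck_for_cover
print(tree({"sees_player": True}))    # attack
print(tree({}))                       # patrol
```

The gunshot reaction appears once, near the top of the tree, instead of being copied into every state the agent might have been in.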
Now add a Planner. While the end result of a planner is a state (just like the FSM and behavior tree above), how it gets to that state is significantly different.
Like a behavior tree, the reasoning architecture behind a planner is separate from the code that "does stuff". A planner takes its situation—the state of the world at the moment—and compares it to a collection of individual atomic actions that it could do. It then assembles one or more of these tasks into a sequence (the "plan") so that its current goal is met. In DE that might be to build its economy or its defenses before finally attacking the nearest perceived threat.
Unlike other architectures, which start at the current state and look forward, a planner actually works backwards from its goal. For example, if the goal is "kill player", a planner might discover that one method of satisfying that goal is to "shoot player". Of course, this requires having a gun. If the agent doesn't have a gun, it would have to pick one up. If one is not nearby, it would have to move to one it knows exists. If it doesn't know where one is, it may have to search for one. The result of searching backwards is a plan that can be executed forwards.
The planner diverges from the FSM and BT in that it isn’t specifically hand-authored. Therein lies the difference in planners—they actually solve situations based on what is available to do and how those available actions can be chained together. One of the benefits of this sort of structure is that it can often come up with solutions to novel situations that the designer or programmer didn’t necessarily account for and handle directly in code.
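Here's a toy backward-chaining planner in the spirit of the "kill player" example above. This is a sketch of the general idea, not Monolith's actual GOAP implementation; the action names, preconditions, and effects are invented:

```python
# Toy backward-chaining planner. Each action has preconditions
# and effects; we search backwards from the goal for an action
# whose effects satisfy it, then recurse on its unmet
# preconditions. (No cycle detection: a sketch, not production code.)

ACTIONS = {
    "shoot_player":   {"pre": {"has_gun"},            "eff": {"player_dead"}},
    "pick_up_gun":    {"pre": {"gun_nearby"},         "eff": {"has_gun"}},
    "go_to_gun":      {"pre": {"knows_gun_location"}, "eff": {"gun_nearby"}},
    "search_for_gun": {"pre": set(),                  "eff": {"knows_gun_location"}},
}

def plan(goal, state):
    """Return a forward-executable list of actions achieving `goal`."""
    if goal in state:
        return []
    for name, a in ACTIONS.items():
        if goal in a["eff"]:
            steps = []
            for pre in a["pre"]:
                sub = plan(pre, state)
                if sub is None:
                    break
                steps += sub
            else:
                return steps + [name]
    return None  # goal unreachable with the known actions

print(plan("player_dead", set()))
# ['search_for_gun', 'go_to_gun', 'pick_up_gun', 'shoot_player']
print(plan("player_dead", {"has_gun"}))  # ['shoot_player']
```

Give the agent a gun and the plan collapses to one step; take away its knowledge of the world and the chain grows, exactly as the example above describes.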
As I mentioned, Jeff Orkin used them in Monolith's creepy shooter, F.E.A.R. His variant was referred to as Goal-Oriented Action Planning, or GOAP. For more information on GOAP, see Jeff's page:
http://web.media.mit.edu/~jorkin/goap.html
To sum it up, there's a lot to A.I., and I haven't even touched utility-based systems or NNs (neural networks). I'll note my A.I. also includes personalities (20 of them) that give weight to what the A.I. might do: a janitor will run and hide from a monster, but a soldier will attack. There's a small chance the janitor is Bruce Willis, though, so I give it a 90-10 weight. This way Patton can play like Patton, Rommel will behave like Rommel, and everything is much less predictable.
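As a rough sketch of that weighting idea (the personality names and numbers are illustrative, not from DE's or my actual code):

```python
# Personality-weighted action choice: each personality biases the
# odds, with a small chance of acting out of character
# (the "janitor might be Bruce Willis" 90-10 weight).
import random

PERSONALITY_WEIGHTS = {
    "janitor": {"flee": 0.9, "attack": 0.1},
    "soldier": {"flee": 0.1, "attack": 0.9},
}

def choose_action(personality, rng=random):
    weights = PERSONALITY_WEIGHTS[personality]
    actions = list(weights)
    return rng.choices(actions, weights=[weights[a] for a in actions])[0]

random.seed(1)
counts = {"flee": 0, "attack": 0}
for _ in range(1000):
    counts[choose_action("janitor")] += 1
print(counts)  # roughly 900 flee / 100 attack
```

The same machinery with different weight tables is what lets two commanders face the same situation and behave differently.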
This took me years, of course, to learn on my own time. The point being: good A.I. is possible...but you need to know it and you need the time to do it.
Knowing it, I can see from the above what is in play in DE, and I absolutely know it's not ad-hoc "IF/THEN" or even an FSM alone. It may not yet be fully fleshed out, but the core A.I. here is very good.