As I asked at the very beginning, why not think the alternative way? And you gave the answer: because the dev team does not care about the community. So it actually is a fair comparison.
Why not think in an alternative way? The development team could focus more on solving compatibility and performance issues, so that the modding community can expand the horizon even further, just like how the RimWorld community has been working out.
hmmmmm, based on my own programming experience, the primary purpose of annotations (at least in-line annotations) is improving quality of life for the writers, not something about the DoD. I originally started writing those annotations because of the school marking scheme; but as time went on, I kept doing so even after I no longer needed to worry about such things. Those annotations help me a lot when I want to modify or review my code later, and thus improve my quality of life as a programmer.
So this might be the case? If a programmer does not need to maintain his/her own code after completion, or the programmer team at TW changes fast and no programmer stays at TW for long, then annotations are useless for the writers if not specified in the DoD.
hmmmmmm, by not mod-friendly I mean not reader-friendly. The writers also become readers when they need to modify or review the code. Same as above.
I will try later; I have been waiting for official documentation since the EA release.
I know we can't easily reference TW DLLs for making mods, but having a clear look at them would significantly help increase the efficiency of the modding process and the compatibility of mods.
I have read some of them. I can't say I have checked all available .dll files, but among all the files I have read, I did not find any documentation/annotations there, neither in-line nor at the beginning.
ah, I might be using the wrong term; I usually get confused about those terms. I mean annotations and documentation (both overall and in-line) for functions and code, for example:
def nn_update(model, eta):
    """Update NN weights.

    :param model: Dictionary of all the weights.
    :param eta: Learning rate.
    :return: None
    """
I come from a CS background, so I do know how the machine works (though I'm not familiar with game design as a major). And from what I know, this kind of crash does not look like it's induced by a lack of computing power resulting from complexity/structure issues, as explained above.
I do not understand what you mean by "engineering perspective". But as a video game, from the player perspective, it does affect the gaming experience. The reinforcement issue has been discussed above. Another example I can come up with is tactics: with a small battlesize like 500 per side (which is the current cap), tactics and commanding troops won't affect the battle much. Players can barely field reserve/mobile forces effective enough to do something on their own, so most battles are just a one-wave rush and that's it; the choice of tactics has been narrowed down.
The power-of-2 cap suggests that kind of guess, and that's also what I had originally thought. But a year has passed and still no official explanation has been released, despite so many similar forum posts.
Normally not linear, but normally gradual. I don't quite understand what you are saying in the second sentence; do you mean "slower" instead of "faster"?
Picture attached. The CPU and GPU usage drops suddenly when I try to load the battle scene, which suggests the game has not even attempted to do the job once the point has been passed.
It may not be able to go much higher than 2048, but then the performance would be expected to degrade differently on different machines. Not like now, where it suddenly crashes after passing an identical point on all machines, no matter how the game performed before that point. In Warband, if you keep increasing the battlesize, performance drops gradually until you can no longer actually play the game (for example, super-low FPS) before it crashes. But in Bannerlord, the machine may perform normally before the break point and then suddenly crash once the point is passed. Most video games work the Warband way.
Wait, didn't you say it's not hardcoded? Now you are saying it is hardcoded.
I know what you mean; that's why I focus on the pathfinding part. The current AI does not support complex commanding: during a field battle, most of the time we only have several big clusters of units before the final charge (and the fight normally ends fast after that, or the number of units on the field is greatly reduced after that; during the fight we do not need complex pathfinding, just finding the closest enemy). So if pathfinding for the units of a single cluster were computed as a whole, the computational burden would be reduced greatly.
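The cluster idea above could be sketched roughly like this. This is purely a toy illustration of the argument, not TW's actual code; `find_path`, `cluster_pathfinding`, and the greedy grouping are all hypothetical names and choices of mine:

```python
import math

def find_path(start, goal):
    """Stand-in for an expensive pathfinding call (e.g. A*).
    Here it just returns a straight-line path as a stub."""
    return [start, goal]

def cluster_pathfinding(units, goal, radius=10.0):
    """Compute one shared path per cluster of nearby units
    instead of one path per unit."""
    clusters = []
    for pos in units:
        # Greedy grouping: join the first cluster whose anchor
        # is within `radius`, otherwise start a new cluster.
        for c in clusters:
            ax, ay = c["anchor"]
            if math.hypot(pos[0] - ax, pos[1] - ay) <= radius:
                c["members"].append(pos)
                break
        else:
            clusters.append({"anchor": pos, "members": [pos]})
    # One pathfinding call per cluster, shared by all its members.
    paths = {}
    for c in clusters:
        shared = find_path(c["anchor"], goal)
        for pos in c["members"]:
            paths[pos] = shared
    return paths, len(clusters)

# Two tight formations of 5 units each -> only 2 pathfinding
# calls instead of 10, which is the whole point of the argument.
units = [(0.1 * i, 0.0) for i in range(5)] \
      + [(100.0 + 0.1 * i, 0.0) for i in range(5)]
paths, n_calls = cluster_pathfinding(units, goal=(50.0, 50.0))
```

With hundreds of units marching in a handful of formations, the number of expensive pathfinding calls would scale with the number of clusters rather than the number of agents; individual pathing would only be needed once formations break up into the final melee.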
We are talking about PC, not console. Common sense: select game settings with respect to your machine when playing on PC; the game supporting something does not mean your machine can also support the same thing. Simple examples: DLSS/4K. Why can people understand that common sense when talking about graphical issues, but not when talking about battlesize? Recall Warband, when we did not have such an upper cap: would you keep increasing the battlesize when you were already suffering significant performance loss? Your logic is really weird. DO NOT TREAT PLAYERS AS IGNORANT MONKEYS.
Then at least allow players to make that choice themselves, instead of hard-coding it this way.
That may be an explanation, but it is still weird, since the limitation you described would also need to be manually implemented (this is not the upper cap of a binary number the machine can accept and understand). And if it were simply an allocation-space issue, then it should be simple to change (as far as I can imagine, simply increase it to a number so large that no machine can actually reach it; the game would then crash according to the capability of the corresponding machine).
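If that guess were right, the mechanism might look something like a preallocated agent pool. Again, this is a purely hypothetical sketch (`MAX_AGENTS`, `AgentPool`, and `spawn` are my own invented names, not anything from TW's code); its point is that a fixed-size pool fails at exactly the same count on every machine, regardless of how capable the machine is:

```python
MAX_AGENTS = 2048  # hypothetical compile-time constant

class AgentPool:
    """Toy fixed-size pool: all slots are allocated up front,
    so the cap is identical on every machine."""
    def __init__(self, capacity=MAX_AGENTS):
        self.slots = [None] * capacity  # fixed allocation
        self.count = 0

    def spawn(self, agent):
        if self.count >= len(self.slots):
            # Same hard failure point everywhere, independent
            # of how well the machine performed up to now.
            raise RuntimeError("agent pool exhausted")
        self.slots[self.count] = agent
        self.count += 1

pool = AgentPool()
for i in range(2048):
    pool.spawn(f"agent-{i}")  # fills the pool exactly
# one more spawn() call here would raise RuntimeError
```

Which is exactly the oddity being pointed out above: if it really were just an allocation limit like this, bumping `MAX_AGENTS` would be a one-line change, so presumably something else is going on.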
That might be the case; algorithmic complexity may increase exponentially. But still, at least allow players to change the setting themselves, at least through a mod. Common sense: you modify a product, which is Bannerlord in this context, at your own risk. DO NOT TREAT PLAYERS AS IGNORANT MONKEYS.
That's what I'm trying to ask: why and how is such a cap implemented in the lower-level code?
Then it's very weird that this cap is exactly the same for everyone and every machine. Normally, if a crash is induced by the engine, through a memory leak or something like that (since this crash is related to the increased amount of computation required), the break point should be higher on a better machine.
Not sure why there is a 2048 cap on battlesize. If it were hard-coded like this:

if battlesize > 2048:
    raise SystemExit("battlesize exceeds the 2048 cap")
TBH, this is just like typical TW comments on discarded features. It looks like it is saying something at first glance, but after a closer and more careful investigation, it gives no actual information, only the result: discarded.