Should the forum rules state no AI-generated text?

How can the moderation team prove someone is using ChatGPT?
It is not stated in the rules that a user cannot use AI-generated text, as long as the post is not spam.

The definition of spam includes, but is not limited to, the following actions:
  • Double-posting/Duplicate Threads — If you wish to contribute further to a conversation, and yours was the last post in the thread, please edit your last post instead of creating a new one. Likewise, if a topic is already being discussed, please use the existing thread rather than creating duplicate threads.
  • Derailing a thread's topic.
  • Bumping a thread.
  • Quote pyramids — Quotes embedded within quotes, which serve no purpose.
  • Making non-constructive posts.
  • Abusing the ‘Report’ feature.
  • Disruptive or persistently argumentative behaviour.
  • Using the wrong language — Language boards are created for users of shared languages to come together and discuss the games without the need for conversing in English. Posting in said boards in anything other than the intended language is considered a form of spam.
Banning based on suspected AI text is not fair; there is no proof, unless we must all maintain a consistent individual quality of writing. What if one day I exceed my usual writing skills and write a fascinating, long, in-depth text?
 
Personal opinion: AI-generated text usually falls under "making non-constructive posts" because it tends to have fluency and structure without saying anything meaningful. All fluff and no substance, in other words. It's not a matter of quality or quantity, but of content.

It's like referencing a meme in person. They're not your words; it's low-effort, hollow, and kinda cringe.

Speaking as a moderator, we're still deciding policy on it, but the general consensus is that AI-generated posts are undesirable.
 
I think it can be very amusing when the post is barely distinguishable from an actual post, but maybe a warning for spam if it's disruptive or too much. I don't think you need a new rule, though. Besides, if it's really out of hand, Roko can always moderate it out of existence before it even happened.
 
The issues described with AI-generated text can be said of human-made text too.
AI is already better (or soon will be) at being coherent, with correct grammar and spelling, than most people on this forum are capable of.
And since there's no way of knowing who's generating the text, you just go on moderating as you always have.
 
The issues described with AI-generated text can be said of human-made text too.
AI is already better (or soon will be) at being coherent, with correct grammar and spelling, than most people on this forum are capable of.
And since there's no way of knowing who's generating the text, you just go on moderating as you always have.
The largest concern raised is one regarding technical topics, where AI-generated text can look convincing and read as authoritative but be factually incorrect. At best it wastes the time of others in the discussion, but it could incorrectly inform decision-making as well. Many readers of such topics may lack the necessary depth of knowledge or experience to realize that what was posted is incorrect. These are topics where accuracy is critically important, and knowingly shoveling unvetted AI-generated text into replies is reckless spamming. I would argue that it is worse than posting incorrect information in one's own words, because one is easily attributable to ignorance while the other is willful disregard for the truth.

Of course, this does depend on our ability to distinguish authentic posts from generated posts, but we're at a point in the development of this technology when it is not very difficult to do so. Still, we don't want to adopt a policy of unilateral action in these cases, because one person's judgment isn't sufficient to reliably avoid false positives.

Lots of people who don’t speak English use Google Translate. Technically, that’s AI-generated text. As far as I’m concerned, if posts are constructive, it doesn’t matter how they’re generated. Equally, spam is spam whatever its source, and should be dealt with accordingly.
I would argue that not all spam is created equal once one takes intent, context, origin, and content into consideration. Copypastas, for example, fit a somewhat rigid definition of spam, but some are so prolific that they have effectively become pop culture (like the NAVY SEAL copypasta) and might be posted in some variation as a joke or reference. Similarly, flagrant self-promotion isn't allowed on the forum when it spreads across unrelated threads, but we allow users to post dedicated personal threads where they can freely ignore some aspects of the current spam rules and promote their own content/media/projects as they please.

With your example, nuance still applies, and people using translation services rarely do so with ill intent. The fact they are translating their text usually means they're trying to follow our rules which require posts on international boards to be written in English. On a technical note, translation services convert an existing text from one language to another with the goal of accurately conveying the same message in the most similar way, whereas the AI-generated posts we are concerned about are instead creating entirely new text from prompts.
 
That sounds like spamming, whether there's any kind of Intelligence present behind it or not. I wouldn't argue that it can't be used in a malicious way, just that it should fall under the usual moderation when it is.

At the risk of sounding ignorant, I'm just not sure it can be utilized any other way. Could it really be used like 'Hey, bot, make a post about ABC, my opinion is XYZ, go boy, fetch'? Or is it like the art ones, where it's good enough at a random crapshoot that the output can then be taken the way you want?
 
That sounds like spamming, whether there's any kind of Intelligence present behind it or not. I wouldn't argue that it can't be used in a malicious way, just that it should fall under the usual moderation when it is.
That's the prevailing opinion so far, and an addition to the rules (if we make one) would most likely be an addendum to the spam rule to clarify that we consider AI-generated text to be spam.
 
The largest concern raised is one regarding technical topics, where AI-generated text can look convincing and read as authoritative but be factually incorrect. At best it wastes the time of others in the discussion, but it could incorrectly inform decision-making as well.

This is already a problem: in the Warband days there were hundreds of myths flying around about modding and gameplay that nobody bothered to check. And given how little information there is online, I highly doubt ChatGPT would give even a remotely coherent answer to technical questions about either Bannerlord or Warband.

The rule just seems a bit ultra-specific to me, and completely unenforceable. There were some silly threads around when people realised you could use it to generate PR speak, but its use is more like yet another meme trend than anything else.
 
Could it really be used like 'Hey, bot, make a post about ABC, my opinion is XYZ, go boy, fetch'?
ChatGPT can do exactly that. I had it try to write the abstract for a scientific paper on a given topic in my field, just to see what it would produce. While there were some things that were slightly off, it was way, way better than many abstracts I have seen in actual papers, and it would have been very difficult for someone to tell that it was AI-generated (basically impossible for anyone not knowledgeable on the topic).

It is definitely worse at writing other types of content, e.g. fiction (I asked it to write a short story about a generic fantasy orcs-vs-humans battle, but with the twist that the orcs were the good guys; the result was quite amusing).
 