• 1 Post
  • 13 Comments
Joined 1Y ago
Cake day: Feb 13, 2024


I fully agree. What worries me is if bad actors create bots that are able to overwhelm the human moderators.


Yes, strong moderation by members of the community is sufficient to recognize and remove bad (human) actors. The question is one of volume and overwhelming those human mods. GPT can create hundreds of bad-faith accounts.


It’s theirs. They can do whatever they want. Any limit on their power within the instance/community is purely voluntary on the part of the owner.


Mods and admins on the Fediverse are not democratically elected; they have complete control. Accusing one of “power tripping” in their own community, on an instance they presumably pay for, is not a rational accusation, since by definition they cannot exist in a state of less power. What that community is really doing is using the threat of public shaming to influence behavior. That’s how you get weak moderation and generic communities where bad actors can thrive. A community dedicated to “stopping bad mods” sounds good on the surface, but it’s an argument made in bad faith.


Why are you putting up with a “shitty” mod? Are you trying to force your speech into a community that has asked you not to?


Great response, thank you. My concern is more focused on future measures: what happens if/when registration applications are answerable by a bot? It’s not hard to imagine. What happens when a GPT-powered bot leaves totally “normal” unique comments 90% of the time, but occasionally recommends a product or pushes a political agenda?


Further, there’s nothing that says an interest-based instance needs open registration at all. One could imagine a world where local instances hold all the users and identities, and the interest-based instances simply provide communities to the larger fediverse with no users of their own.

Yes, I’ve had this same thought and I think it’s a great model! Whether it comes to pass remains to be seen. But the concept is good!


“Power tripping mods” definitionally cannot exist on the fediverse, where anyone can create an instance or community. Even on Reddit, 99% of the time someone said a mod was “power tripping,” it was just a right-winger upset that the mod removed their disruptive nonsense.

The purpose of communities like the one you linked to is to shame mods into a passive, generic, bare-minimum style of moderation, when we should be encouraging the opposite if we want diversity in the fediverse.


What’s the incentive to operate an LLM on the fediverse that is truly helpful and not just trying to secretly sell something/push an agenda?


Any speculation as to what those tools might look like?


Thanks for the thoughtful response. I too think that regional instances would be ideal as a “backbone” of the social web. But at the same time, I feel that interest-based connection is a truly unique strength of the internet, and it would be a sad thing to lose it to the slop.

Ultimately, I think that many smaller instances are likely the best long-term defense against slop, since there is no incentive for them to scale beyond their needs. But every instance admin is technically responsible for the content federated in from every other instance. Which can get overwhelming!


How can the Fediverse protect against AI slop?
The Fediverse is a great system for preventing bad actors from disrupting “real” human-to-human conversations, because the mods, developers, and admins are all working out of a desire to connect people (as opposed to “trust and safety” teams more concerned with user retention). Right now it seems the Fediverse’s main protection is that it just isn’t a juicy enough target for wide-scale spam and bad-faith agenda pushers.

But assuming the Fediverse *does* grow to a significant scale, what (current or future) mechanisms are, or could be, in place to fend off a flood of AI slop that is hard to distinguish from human output? Even the most committed instance admins can only do so much. For example, I have a feeling all “good” instances will eventually have to turn on registration applications and only federate with other instances that do the same. But it’s not crazy to imagine that GPT could soon outmaneuver most registration questions, which means applications will only slow the growth of the problem, not manage it long-term. Any thoughts on this topic?
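The “only federate with instances that gate registration” idea above can be sketched as a simple allowlist check on incoming federated activity. This is a minimal, hypothetical illustration, not any real fediverse server’s API: the instance names, the `accept_activity` function, and the activity shape are all made up for the example.

```python
# Hypothetical sketch of allowlist-mode federation: an instance accepts
# activities only from peers it has vetted (e.g. peers known to require
# registration applications). All names here are illustrative.

ALLOWED_INSTANCES = {"local.example", "trusted.example"}

def accept_activity(activity: dict) -> bool:
    """Accept an incoming federated activity only if its actor's
    home instance is on the allowlist."""
    # An ActivityPub-style actor URL looks like
    # "https://instance.example/u/alice"; the domain is the third segment.
    actor = activity.get("actor", "")
    if not actor.startswith("https://"):
        return False
    domain = actor.split("/")[2]
    return domain in ALLOWED_INSTANCES

print(accept_activity({"actor": "https://trusted.example/u/alice"}))  # True
print(accept_activity({"actor": "https://slopfarm.example/u/bot"}))   # False
```

The trade-off the post points at shows up directly here: the allowlist blocks slop farms outright, but it also blocks every legitimate new instance until an admin manually vets it.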

The fact that ✅ means “no” and ❌ means “yes” is very confusing.