The Fediverse is a great system for preventing bad actors from disrupting “real” human-to-human conversations, because the mods, developers, and admins are all working out of a desire to connect people (as opposed to “trust and safety” teams more concerned with user retention).
Right now it seems the Fediverse’s main protection is that it just isn’t a juicy enough target for wide-scale spam and bad-faith agenda pushers.
But assuming the Fediverse does grow to a significant scale, what mechanisms are in place, or could be put in place, to fend off a flood of AI slop that is hard to distinguish from human output? Even the most committed instance admins can only do so much.
For example, I have a feeling that in the near future all “good” instances will have to turn on registration applications and only federate with other instances that do the same. But it’s not crazy to imagine that GPT could soon outmaneuver most registration questions, which means applications will only slow the growth of the problem rather than manage it long-term.
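To make that concrete, here’s a minimal sketch of what an allowlist federation policy could look like, as hypothetical Python for a server’s inbox handler. The instance names and the `VETTED_PEERS` list are invented for illustration; real servers expose comparable controls (e.g., Mastodon’s limited federation mode, Lemmy’s allowed-instances setting) through configuration rather than code like this.

```python
# Hypothetical illustration: accept inbound ActivityPub activity only
# from peers a human admin has vetted as requiring registration
# applications. All domain names here are made up for the example.

# Peers manually vetted as requiring registration applications.
VETTED_PEERS = {
    "vetted.example.social",
    "another-vetted.example.org",
}

def should_federate(sender_domain: str) -> bool:
    """Allowlist check: federate only with manually vetted peers."""
    return sender_domain in VETTED_PEERS

# Example: deciding what to do with two incoming activities.
for domain in ("vetted.example.social", "spam-farm.example.net"):
    verdict = "accept" if should_federate(domain) else "drop"
    print(f"{domain}: {verdict}")
```

The catch, of course, is that the vetting itself is manual, which is exactly the scaling problem above.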
Any thoughts on this topic?
Mods and admins on the Fediverse are not democratically elected; they have complete control. Accusing one of “power tripping” in their own community, on an instance they presumably pay for, is not a rational accusation, since they definitionally cannot exist in a state of less power. What that community is really trying to do is use the threat of public shaming to influence behavior. That’s how you get weak moderation and generic communities where bad actors can thrive. A community dedicated to “Stopping bad mods” sounds good on the surface, but it’s an argument made in bad faith.
Mods don’t pay for the instance; they aren’t in charge of any of it.
Some admins have strong policies against getting involved in the moderation of individual communities, leaving potentially power-tripping mods unchecked.