Just an explorer in the threadiverse.
Unless you’re dealing with that many, it’s unlikely to be the issue. You should do performance testing on your setup while you’re using it to see what the actual bottleneck is.
Solid answer. To add some additional context:
This leads into my next concern which is GDPR, because now i can’t be certain that a users data gets deleted upon their request and i’m not certain whether i would be liable since my instance federates with the malicious instance (which may also not be hosted in the EU which is itself problematic, and even if i’m not liable it’s still not great).
I’m not a lawyer, and while I have done compliance work, it wasn’t for GDPR… so take this with several grains of salt.
I’d be fairly surprised if other instances caching your data had any impact on your GDPR status (unless you wrongfully made that data public in the first place).
If WordPress.com hosts an intentionally public blog post for a user, and archive.org scrapes it and saves a copy, and the user deletes it from WordPress (which correctly handles the deletion), would GDPR hold WordPress liable for a different organization retaining a copy on a different server? It would surprise me if it did; I can’t imagine how anyone could be in compliance while hosting public content under any circumstances if that were so. ActivityPub is not exactly the same as this, as it automates the process of copying data to many servers. But so does RSS, and that’s not new. If this were an issue, I think we’d have seen examples of it before now.
It’s more likely that each ActivityPub instance is a different service from GDPR’s perspective, and each instance needs the capability to delete content associated with a user upon request. But I believe deletes are already federated by default, so we’re only talking about malicious instances that deliberately ignore deletion requests. These would not be GDPR compliant, but I suspect that doesn’t reflect on your liability.
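To make the federated-delete idea concrete, here is a minimal sketch of the kind of `Delete` activity an ActivityPub server broadcasts to its peers when a user removes content. The actor and object URLs are hypothetical; real servers also sign and POST this to each follower instance's inbox, which a well-behaved instance honors by removing its cached copy.

```python
import json

def build_delete_activity(actor: str, object_id: str) -> dict:
    """Build an ActivityPub Delete activity announcing that object_id
    was removed. Per convention, a Tombstone stands in for the object."""
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Delete",
        "actor": actor,
        "object": {"type": "Tombstone", "id": object_id},
        "to": ["https://www.w3.org/ns/activitystreams#Public"],
    }

# Hypothetical actor and object IDs, for illustration only.
activity = build_delete_activity(
    "https://example.social/users/alice",
    "https://example.social/objects/12345",
)
print(json.dumps(activity, indent=2))
```

A malicious instance can simply ignore this activity, which is exactly the gap being discussed: the protocol federates the deletion request, but compliance by the receiver is voluntary.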
… which may also not be hosted in the EU which is itself problematic…
Data locality is an interesting question, but I’m again inclined to suspect that YOU are not hosting data outside the EU. Other instances are, and the liability for doing so is theirs not yours.
If you were concerned about this, you could do whitelist federation, where you explicitly add instances in appropriate jurisdictions rather than federating by default with a blacklist. The opportunity cost of doing this is, of course, cultural irrelevance. You’d be cutting yourself off from most of the physical and virtual world in order to achieve improved data locality.
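The whitelist approach amounts to a simple membership check on inbound federation. Here's a sketch under assumed instance names (the hostnames below are hypothetical; real server software like Lemmy exposes this as an allowed-instances setting rather than code you write yourself):

```python
from urllib.parse import urlparse

# Hypothetical allowlist: federate ONLY with explicitly approved hosts,
# instead of federating with everyone minus a blocklist.
ALLOWED_INSTANCES = {"lemmy.example.eu", "mastodon.example.de"}

def may_federate(actor_url: str) -> bool:
    """Accept an inbound activity only if the actor's host is allowlisted."""
    return urlparse(actor_url).hostname in ALLOWED_INSTANCES

print(may_federate("https://lemmy.example.eu/u/alice"))    # allowlisted host
print(may_federate("https://random.example.com/u/bob"))    # unknown host, rejected
```

The trade-off described above falls out directly: every host not in the set is invisible to you, which is precisely what makes the default-deny posture both safe for data locality and costly for reach.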
The loss of control over content is also something that i don’t particularly like…
This is real but rather the point of federation. If you really don’t like it, then federation is not for you. But consider multiple perspectives:
Federation does require the hoster to give up power, but it increases the power of users even more in return. Like GDPR, federation aims at increasing the data autonomy of users, but rather than focusing on privacy and data destruction to facilitate a user who wants to take their toys and go home, it focuses on how users can continue to access their data usefully in the face of an admin who wants to take their toys and go home. Although the means to achieve them are often in conflict… control over data destruction and control over data preservation are two sides of the same data-autonomy coin.
Read up on history. You have this completely backwards. It took many years of government intervention to force them to open their networks. And in some countries banks still don’t interoperate or charge obscene rates for it.
I have nothing backwards because I said nothing about cause and effect; you appear to have fabricated some historical error about regulation so you could have something to condescend to me about. But even so, regulators did not invent cross-network calls, nor did they invent inter-bank transfers. Both of these industries had those things prior to regulatory mandates and went through “wild west” periods that have clear parallels to the fediverse today (the early 1900s for telephones and the 17th century for banks), when interoperation existed but was quite selective. My point was that mature federated ecosystems converge on valuing connectivity very highly, and the fact that this value was so clear in these two cases that it was eventually encoded in law supports rather than refutes that claim.
It is the ability of communities to choose not to federate with anyone else which gives Mastodon its strength.
There are zero mature federated ecosystems where this statement is true. While the freedom to (dis)associate is foundational to federated systems as an abuse management tool, it’s existentially dangerous when deployed as an ideological weapon or negotiating lever.
In all these cases, there were phases where the network was immature and these squabbles did happen. But players who isolated themselves lost relevance, and eventually the value of connecting to the wider network (with all of the challenges and opportunities that brings), became greater than the value of winning any other dispute.
This idea that de-peering everyone you don’t like is normal and how marginalized communities get protected is only popular right now because the fediverse only just barely matters at all, and almost everyone is willing to disrupt the health of the network in truly painful ways for any reason or no reason. If the fediverse doesn’t kill itself with infighting, the groups that find ways to address their disputes while remaining connected will come to form the fediverse that matters.
Of course, anyone who disagrees can defederate with anyone and everyone if they wish. But in so doing, they limit their own reach and relevance until eventually they’re left alone talking to themselves on a fedi-desert-island. I get marginalized communities not wanting to deal with the hassle of a growing network, but getting marginalized stories heard is one of the key ways to improve things going forward and defederate-first-ask-questions-later doesn’t help there.
I would argue that for most purposes, kbin IS lemmy. It has 1/10th the native user-count and 1/100th the native comment count according to https://the-federation.info/platform/73 and https://the-federation.info/platform/184. I get the sense that a large part of what people use kbin for is as an alternative UI to access lemmy communities. It seems much further from achieving a critical mass of native communities though.
That’s not a knock on kbin; people use it and enjoy it. But I’d contend that to the extent that either kbin or lemmy are “reddit replacements” at all, they act together as a single federated option with multiple UXes rather than two discrete options.
There’s no official fediverse anything, the fediverse is an unorganized collection of:
The fediverse itself, though, has no official website. It’s the emergent collection of all of the above. There are websites that use the word “fediverse” in their name, but those sites are ABOUT the fediverse, or they might be PART OF the fediverse… but no one site IS the fediverse or can represent it or be the official home of it. It’s similar to the internet. There happens to be a website at https://www.internet.com/. But it’s obviously not like… the internet… or the official website of the internet. There is no official website of the internet; that’s just a random website that happens to be ON the internet. The fediverse works the same way: it’s an abstract idea used to describe a loosely affiliated collection of things that try to interoperate with each other in a variety of complicated ways.
Lemmy.world has been under a persistent denial-of-service attack in recent weeks: https://lemmy.world/post/2923697
The admins are aware and responding daily. The technical specifics of the attack keep changing: as they close off one avenue of attack, the attackers switch to a slightly different approach in a game of cat and mouse.
There’s nothing you can do but wait, it will come back online… or use alts on other instances. Lemmy.world has a competent admin team who is working hard to weather these attacks, but lemmy the software is not prepared for this kind of adversarial resource consumption, so it’s a very hard job both to layer protections on top of lemmy and to fix the underlying issues so it’s natively more resilient.