Just an explorer in the threadiverse.

  • 0 Posts
  • 8 Comments
Joined 1Y ago
Cake day: Jun 04, 2023


Lemmy world has been under persistent denial of service attack in recent weeks: https://lemmy.world/post/2923697

The admins are aware and responding daily. The technical specifics of the attack keep changing: as they close off one avenue of attack, the attackers switch to a slightly different approach in a game of cat and mouse.

There’s nothing you can do but wait; it will come back online… or use alts on other instances. Lemmy world has a competent admin team working hard to weather these attacks, but Lemmy the software is not prepared for this kind of adversarial resource consumption, so it’s a very hard job to both layer protections on top of Lemmy and fix the underlying issues so it’s natively more resilient.


Unless you’re dealing with that many, it’s unlikely to be the issue. You should do performance testing on your rig while you’re using it to see what the actual bottleneck is.
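
If it helps, here’s a rough way to watch where the pressure actually is during playback. This is a minimal sketch, assuming Python with the psutil package installed; the process name is a placeholder for whatever your DAW actually is.

```python
# Sample whole-system CPU and disk throughput once per second while the
# DAW plays back. Requires: pip install psutil
import time

import psutil

DAW_NAME = "reaper"  # placeholder: substitute your DAW's process name


def find_daw():
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] and DAW_NAME in proc.info["name"].lower():
            return proc
    return None


daw = find_daw()
prev = psutil.disk_io_counters()

while True:
    time.sleep(1)
    cur = psutil.disk_io_counters()
    read_mb = (cur.read_bytes - prev.read_bytes) / 1e6
    write_mb = (cur.write_bytes - prev.write_bytes) / 1e6
    prev = cur
    cpu = psutil.cpu_percent(interval=None)  # whole-system CPU %
    daw_cpu = daw.cpu_percent(interval=None) if daw else 0.0
    print(f"cpu={cpu:5.1f}%  daw_cpu={daw_cpu:6.1f}%  "
          f"read={read_mb:7.2f} MB/s  write={write_mb:7.2f} MB/s")
```

If playback glitches while the disk columns stay low and CPU spikes, the problem is plugins or latency settings, not storage; if reads actually saturate the drive, then disk layout is worth revisiting.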

Solid answer. To add some additional context:

  • If you have enough RAM (which today is 16–64 GB), there will be little to no I/O to the application/program files after the first playback of a session. All that program data will get cached in RAM by the OS and will probably generate no additional I/O for the remainder of the session (the sketch after this list shows the effect directly).
  • If you have any audio tracks recorded from a mic or other audio input, these are the most likely source of high-throughput disk I/O during playback, and it’s these that you would want to isolate from your sample library. As noted, that impact might be little to none even for sizeable 64-track or 128-track projects running off an SSD. But if any change to disk layout matters, separating project audio from sample audio is near the top of the list… much more so than separating programs from anything else.
  • If OP is debugging unreliable playback or recording, that’s much more likely to be either bad latency configs or expensive plugins saturating the CPU. I’d be real surprised if the issue was disk I/O unless they’re working on something like a 4 GB laptop that can’t cache anything in RAM. But even there… it’s a RAM problem manifesting as disk I/O. Latency settings can be complex to debug, but they’re a common source of gappy playback/recording when CPU, disk, and RAM aren’t saturated.
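
To see the caching effect from the first bullet concretely, you can time two passes over the same sample folder. This is a minimal sketch, assuming the path points at your sample library and that nothing has read it since boot (otherwise the first pass won’t be truly cold).

```python
# Time two sequential read passes over a folder of samples. Pass 1 comes
# (mostly) from the drive; pass 2 is served from the OS page cache in RAM
# if the library fits.
import pathlib
import time

SAMPLE_DIR = pathlib.Path("~/samples").expanduser()  # placeholder path


def read_all(root: pathlib.Path) -> float:
    start = time.perf_counter()
    total = 0
    for f in root.rglob("*"):
        if f.is_file():
            total += len(f.read_bytes())
    elapsed = time.perf_counter() - start
    print(f"read {total / 1e9:.2f} GB in {elapsed:.1f}s "
          f"({total / 1e6 / elapsed:.0f} MB/s)")
    return elapsed


cold = read_all(SAMPLE_DIR)  # first pass: hits the drive
warm = read_all(SAMPLE_DIR)  # second pass: mostly RAM
print(f"warm pass was {cold / warm:.1f}x faster")
```

If the warm pass isn’t dramatically faster, the library doesn’t fit in RAM and drive layout/speed starts to matter a lot more.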

This leads into my next concern which is GDPR, because now I can’t be certain that a user’s data gets deleted upon their request and I’m not certain whether I would be liable since my instance federates with the malicious instance (which may also not be hosted in the EU which is itself problematic, and even if I’m not liable it’s still not great).

I’m not a lawyer, and while I’ve done compliance work, it wasn’t for GDPR… so take this with several grains of salt.

I’d be fairly surprised if other instances caching your data had any impact on your GDPR status (unless you wrongfully made that data public in the first place).

If WordPress.com hosts an intentionally public blog post for a user, and archive.org scrapes it and saves a copy, and the user deletes it from WordPress (which correctly handles the deletion), would GDPR hold WordPress liable for a different organization retaining a copy on a different server? It would surprise me if it did; I can’t imagine how anyone could be in compliance while hosting public content under any circumstances if that were so. ActivityPub is not exactly the same as this, as it automates the process of copying data to many servers. But so does RSS, and that’s not new. If this were an issue, I think we’d have seen examples of it before now.

It’s more likely that each ActivityPub instance is a different service from GDPR’s perspective, and each instance needs the capability to delete content associated with a user upon request. But I believe deletes are already federated by default, so we’re only talking about malicious instances that deliberately ignore deletion requests. These would not be GDPR-compliant, but I suspect that doesn’t reflect on your liability.
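
For reference, a federated delete is just another activity pushed to the instances that received the original content. This is a simplified sketch of its shape, written as a Python dict; the IDs and actor are made up, and real servers add HTTP signatures and more addressing metadata.

```python
# Simplified shape of an ActivityStreams 2.0 Delete activity, the mechanism
# by which deletions federate. IDs and the actor below are illustrative.
delete_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "id": "https://example-instance.social/activities/delete/12345",
    "type": "Delete",
    "actor": "https://example-instance.social/users/alice",
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
    "object": {
        "id": "https://example-instance.social/posts/67890",  # the deleted post
        "type": "Tombstone",
    },
}
# A well-behaved remote instance that cached the post removes or tombstones
# its copy on receipt; a malicious instance can simply ignore the activity.
```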

… which may also not be hosted in the EU which is itself problematic…

Data locality is an interesting question, but I’m again inclined to suspect that YOU are not hosting data outside the EU. Other instances are, and the liability for doing so is theirs, not yours.

If you were concerned about this, you could do whitelist federation, where you explicitly add instances in appropriate jurisdictions rather than federating by default with a blacklist. The opportunity cost of doing this is, of course, cultural irrelevance. You’d be cutting yourself off from most of the physical and virtual world in order to achieve improved data locality.
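
Conceptually, the difference between the two modes is just which default you start from. A toy sketch, not Lemmy’s actual implementation, with made-up instance names:

```python
# Toy comparison of blacklist-by-default vs whitelist-by-default federation
# policy. Not Lemmy's real code; instance names are illustrative.
BLOCKED = {"malicious.example"}
ALLOWED = {"lemmy.world", "lemmy.ml", "an-eu-instance.example"}


def federates_with_blacklist(instance: str) -> bool:
    # Default-open: talk to everyone except explicit blocks.
    return instance not in BLOCKED


def federates_with_whitelist(instance: str) -> bool:
    # Default-closed: talk only to instances you've explicitly vetted
    # (e.g. for jurisdiction), at the cost of reach.
    return instance in ALLOWED
```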

The loss of control over content is also something that i don’t particularly like…

This is real, but it’s also rather the point of federation. If you really don’t like it, then federation is not for you. But consider multiple perspectives:

  • As a user of reddit or another centralized publishing platform, you already didn’t have control over your data. The hoster did, as did the untold millions who scraped it maliciously and silently. This does not compare favorably to the fediverse.
  • As an admin of a traditional forum like PHPBB, you do give up control in the Fediverse. Though when you account for malicious scrapers, how much you give up is debatable.
  • But as a user of that PHPBB forum, the fediverse gives you MORE control. If the admin of that non-federated forum throws a tantrum and shuts it down, the community and posts are lost. As a user in the Fediverse, federation allows users on other instances to retain their account identity, recover posts from caches, and re-establish their community elsewhere against the wishes of the previous hoster.

Federation does require the hoster to give up power, but it more than equally increases the power of users in return. Like GDPR, federation aims at increasing the data autonomy of users, but rather than focusing on privacy and data destruction to facilitate a user who wants to take their toys and go home, it focuses on how users can continue to access their data usefully in the face of an admin who wants to take their toys and go home. Although the means to achieve them are often in conflict… control over data destruction and control over data preservation are two sides of the same data-autonomy coin.


Read up on history. You have this completely backwards. It took many years of government intervention to force them to open their networks. And in some countries banks still don’t interoperate or charge obscene rates for it.

I have nothing backwards, because I said nothing about cause and effect; you appear to have fabricated some historical error about regulation so you could have something to condescend to me about. But even so, regulators did not invent cross-network calls, nor did they invent inter-bank transfers. Both of these industries had those things prior to regulatory mandates and went through “wild west” periods that have clear parallels to the fediverse today (the early 1900s for telephones, the 17th century for banks), when interoperation existed but was quite selective. My point was that mature federated ecosystems converge on valuing connectivity very highly, and the fact that this value was so clear in these two cases that it was eventually encoded in law supports rather than refutes that claim.


It is the ability of communities to choose not to federate with anyone else which gives Mastodon its strength.

There are zero mature federated ecosystems where this statement is true. While the freedom to (dis)associate is foundational to federated systems as an abuse-management tool, it’s existentially dangerous when deployed as an ideological weapon or negotiating lever.

  • The internet is federated, but you don’t see tier 1 ISPs de-peering each other over arguments on social media.
  • Email (which IS a great analogy… exactly because of the precedent for combatting abuse at scale) is federated, and you don’t see major providers blackholing major providers.
  • Telephone networks and the banking system are both federated, and generally major players don’t de-peer other major players within established ecosystems.

In all these cases, there were phases where the network was immature and these squabbles did happen. But players who isolated themselves lost relevance, and eventually the value of connecting to the wider network (with all of the challenges and opportunities that brings) became greater than the value of winning any other dispute.

This idea that de-peering everyone you don’t like is normal, and is how marginalized communities get protected, is only popular right now because the fediverse only just barely matters at all, and almost everyone is willing to disrupt the health of the network in truly painful ways for any reason or no reason. If the fediverse doesn’t kill itself with infighting, the groups that find ways to address their disputes while remaining connected will come to form the fediverse that matters.

Of course, anyone who disagrees can defederate with anyone and everyone if they wish. But in so doing, they limit their own reach and relevance until eventually they’re left alone talking to themselves on a fedi-desert-island. I get marginalized communities not wanting to deal with the hassle of a growing network, but getting marginalized stories heard is one of the key ways to improve things going forward, and defederate-first-ask-questions-later doesn’t help there.


I would argue that for most purposes, kbin IS lemmy. It has 1/10th the native user-count and 1/100th the native comment count according to https://the-federation.info/platform/73 and https://the-federation.info/platform/184. I get the sense that a large part of what people use kbin for is as an alternative UI to access lemmy communities. It seems much further from achieving a critical mass of native communities though.

That’s not a knock on kbin; people use it and enjoy it. But I’d contend that to the extent that either kbin or lemmy is a “reddit replacement” at all, they act together as a single federated option with multiple UXs rather than two discrete options.


You wrote the same URL twice ;)

Fixed. Thanks for pointing it out, mate.


There’s no official fediverse anything; the fediverse is an unorganized collection of:

  • Applications that share a somewhat compatible protocol by which they transfer information (ActivityPub). Mastodon, Lemmy, Friendica are all examples of Fediverse applications that have varying abilities to interoperate with each other. These applications are often created by organized projects that have their own websites. Like there is a Mastodon project with an official website at https://joinmastodon.org/ and there is a Lemmy project with an official website at https://join-lemmy.org/.
  • Instances of one of these applications that are fully compatible with other instances of the same application. So you can think of all the lemmy instances as a Lemmyverse where users on one Lemmy instance can talk to users on a different Lemmy instance through federation. Instances pretty much always have websites like mastodon.social and lemmy.ml.
  • Individuals using accounts on those instances. User accounts often have a page on their instance, like mine is https://lemmy.world/u/PriorProject.
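
A concrete way to see how those last two layers hang together: given the user@instance form of an account like the one above, any instance can be asked to resolve it via WebFinger, which is how fediverse software finds the account behind a handle. A minimal sketch, assuming Python with the requests library and network access:

```python
# Resolve a fediverse handle to its ActivityPub actor URL via WebFinger
# (RFC 7033). Assumes: pip install requests
import requests

handle = "PriorProject@lemmy.world"  # user@instance
user, instance = handle.split("@")

resp = requests.get(
    f"https://{instance}/.well-known/webfinger",
    params={"resource": f"acct:{handle}"},
    timeout=10,
)
resp.raise_for_status()

# The "self" link points at the account's ActivityPub actor document.
links = resp.json().get("links", [])
actor_url = next((l.get("href") for l in links if l.get("rel") == "self"), None)
print(actor_url)  # e.g. https://lemmy.world/u/PriorProject
```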

The fediverse itself, though, has no official website. It’s the emergent collection of all of the above. There are websites that use the word “fediverse” in their name, but those sites are ABOUT the fediverse, or they might be PART OF the fediverse… but no one site IS the fediverse or can represent it or be the official home of it. It’s similar to the internet. There happens to be a website at https://www.internet.com/, but it’s obviously not like… the internet… or the official website of the internet. There is no official website of the internet; that’s just a random website that happens to be ON the internet. The fediverse works the same way: it’s an abstract idea used to describe a loosely affiliated collection of things that try to interoperate with each other in a variety of complicated ways.