I’m the administrator of kbin.life, a general-purpose/tech-oriented kbin instance.

  • 0 Posts
  • 8 Comments
Joined 1Y ago
Cake day: Jun 29, 2023


I think there were historically some interoperability issues, and there used to be (my version of mbin is quite old), and maybe still are, issues federating dislikes (which stems from the way they were handled in kbin, which straddles both the thread-based and Mastodon-esque sides of the fediverse). But overall, the larger federation issues of the past are gone.
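For context, a dislike on the wire is just an ActivityStreams `Dislike` activity. A rough sketch as a Python dict (the actor and object URLs here are made up for illustration):

```python
import json

# Roughly what a dislike looks like on the wire: an ActivityStreams
# "Dislike" activity. The actor/object URLs below are invented examples.
dislike = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Dislike",
    "actor": "https://kbin.life/u/someuser",
    "object": "https://lemmy.world/comment/123456",
}
print(json.dumps(dislike, indent=2))
```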

Right now, the choice mainly comes down to which interface you prefer, and whether you want a limited ability to work with Mastodon-type posts, since you can follow Mastodon users and see their posts within the mbin interface.


Pretty much wanted to say the same. Your IP address isn’t known beyond your local instance (and any retention time and purposes should be stated in their privacy policy).

The rest is standard data any federation app will collect upon seeing content from a user.

It’s also worth noting that the user URL (which provides this user data) is generally public. So if you know the user URL, you can fetch this data too.
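As a sketch of what that means in practice, anyone can fetch the public actor document over plain HTTP, assuming a standard ActivityPub endpoint (the account URL below is a made-up example):

```python
import requests

# Fetch the public actor document for a user. The Accept header asks the
# instance for the ActivityPub JSON representation rather than the HTML page.
resp = requests.get(
    "https://kbin.life/u/example",  # hypothetical account, for illustration
    headers={"Accept": "application/activity+json"},
    timeout=10,
)
actor = resp.json()

# Typical public fields: id, type, preferredUsername, inbox, outbox,
# publicKey. No IP address or other connection metadata appears here.
print(actor.get("preferredUsername"), actor.get("inbox"))
```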

Having said that, I do wonder how much they can monetize third-party data about people who have never agreed to a privacy policy granting such uses. It’ll be interesting to see.


I think it only recognizes Lemmy instances as “valid”. But kbin instances will be searched, and it at least listed those federated with me.


Yes, the fact that the rest of the threadiverse carries on makes it by far a better situation. But there’s definitely a large block of communities over there.


Well, a lot of the communities are hosted on lemmy.world. I know it makes the biggest single contribution to my federation ingress. So you will kinda notice a bit of a content drought if they’re down long enough.


Whose fault is it, though? If an instance is capable of handling 100 concurrent users but everyone flocks to the two or three big instances, what can be done? Block instances so they shut down? Then when the shit really hits the fan there’s nowhere to distribute users to.

In the case of lemmy.world, I might suggest they split the instance: the original lemmy.world keeps the communities but has no users, and a new instance is created with the users transferred to it. That way the first instance is dedicated to federating the communities, moving the real-time user database hits to a separate database. I’d also suggest preventing the creation of new communities on that instance.

In real terms, it would have been better if the communities had been spread between instances more, making a more even distribution of the one-to-many federation effort.


I don’t know that it’s a DB design flaw if we’re talking about federation messages to other instances’ inboxes (creating rows at that magnitude for updates does sound like outbound federation messages to me). Those need to be stored somewhere. On kbin, if installed using the instructions as-is, we’re using RabbitMQ (though there is an option to write to the DB instead). But failures still end up hitting SQL, and RabbitMQ is still persisting messages to the drive anyway, so unless you have a dedicated separate RabbitMQ server it makes little difference in terms of hits to storage.

It’s hard to avoid storing them somewhere: you need to know when they’ve been sent, and if there are temporary errors, to keep them until they can be sent. There has to be a way to recover from a crash/reboot/restart of services, and to handle other instances being offline for a short time.
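A minimal sketch of the “write to DB” flavour of this, with made-up table and column names rather than kbin’s actual schema: persist each delivery first, mark it sent on success, and keep it for retry on failure, so a restart simply picks up where it left off.

```python
import sqlite3

# Persistent outbox so deliveries survive a crash/restart. Schema is
# illustrative only; kbin's real tables (and its RabbitMQ path) differ.
db = sqlite3.connect("outbox.db")
db.execute(
    """CREATE TABLE IF NOT EXISTS outbox (
           id INTEGER PRIMARY KEY,
           inbox_url TEXT NOT NULL,
           payload TEXT NOT NULL,
           attempts INTEGER NOT NULL DEFAULT 0,
           sent INTEGER NOT NULL DEFAULT 0
       )"""
)

def enqueue(inbox_url: str, payload: str) -> None:
    # Persist first; the delivery worker picks it up later.
    db.execute("INSERT INTO outbox (inbox_url, payload) VALUES (?, ?)",
               (inbox_url, payload))
    db.commit()

def deliver_pending(send) -> None:
    # After a restart, everything with sent = 0 is simply retried.
    for row_id, inbox_url, payload in db.execute(
            "SELECT id, inbox_url, payload FROM outbox WHERE sent = 0").fetchall():
        try:
            send(inbox_url, payload)  # e.g. a signed HTTP POST to the inbox
            db.execute("UPDATE outbox SET sent = 1 WHERE id = ?", (row_id,))
        except Exception:
            # Temporary failure (instance offline): keep the row, count the try.
            db.execute("UPDATE outbox SET attempts = attempts + 1 WHERE id = ?",
                       (row_id,))
    db.commit()
```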

EDIT: Just read the issue (it’s linked a few comments down). It actually looks like a weird PostgreSQL reaction to a trigger, not something based on the number of connected instances like I thought.
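I haven’t dug into the actual Lemmy schema, but as a generic illustration of how a per-row trigger quietly multiplies writes (using SQLite here just to keep it runnable):

```python
import sqlite3

# Illustration only, not Lemmy's real schema: a trigger that keeps an
# aggregates row in sync turns every comment insert into an extra UPDATE.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE comment (id INTEGER PRIMARY KEY, community_id INTEGER);
CREATE TABLE community_aggregates (community_id INTEGER PRIMARY KEY,
                                   comments INTEGER NOT NULL DEFAULT 0);
INSERT INTO community_aggregates VALUES (1, 0);

-- Every comment insert also fires an UPDATE behind the caller's back.
CREATE TRIGGER comment_count AFTER INSERT ON comment
BEGIN
    UPDATE community_aggregates
    SET comments = comments + 1
    WHERE community_id = NEW.community_id;
END;
""")

db.execute("INSERT INTO comment (community_id) VALUES (1)")
print(db.execute("SELECT comments FROM community_aggregates").fetchone())  # (1,)
```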


I think it’s more like the previous commenter said: it’s the communities more than the users. Every post, comment, and like needs to be sent to every other instance that subscribes to the community. I strongly suspect it’s connected to federation, because at 20:00 UTC yesterday lemmy.world stopped sending my instance anything (previously it was between 2 and 5 messages a second), and it only started again at around 00:00 UTC. I wonder if they were slowly adding instances back to federation?
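To put rough numbers on that fan-out (made-up figures, not lemmy.world’s actual ones):

```python
# Back-of-envelope fan-out: each local activity is delivered to every
# subscribing remote instance. Both numbers below are invented examples.
activities_per_second = 5      # posts + comments + votes created locally
subscribing_instances = 2000   # remote instances following the communities

outbound_per_second = activities_per_second * subscribing_instances
print(outbound_per_second)     # 10000 deliveries/second to queue and send
```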

In any case, the load for that many communities with that many other instances must be huge. The advantages of the fediverse require that communities AND users are spread between instances. In the current climate, the super-instances have most of both, and keeping up with the hardware requirements must be getting harder and harder.