Interested in the intersections between policy, law and technology. Programmer, lawyer, civil servant, orthodox Marxist. Blind.



  • 0 Posts
  • 8 Comments
Joined 1Y ago
Cake day: Jun 05, 2023


So, I'm not super sure what this is or how it works. Is the idea that you run the CGI, it sets up static files, and it responds to AP requests like follows, mentions, boosts and such? I realise lots of people don't like long docs, but I didn't really understand the use case very well.
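
If my reading is right, the model I'm imagining is something like the sketch below: a small binary invoked per request in CGI fashion, reading the activity from stdin and stashing it for a later static-generation step. To be clear, this is purely my guess at how it might work, not the project's actual code; the file name and the flow are made up, and it skips HTTP signature verification entirely.

```rust
// Hypothetical sketch of a CGI-style ActivityPub inbox handler: the web server
// invokes this binary per request, passing metadata in environment variables
// and the request body on stdin (standard CGI). Parsing uses serde_json.
use std::env;
use std::io::{self, Read, Write};

fn main() -> io::Result<()> {
    let method = env::var("REQUEST_METHOD").unwrap_or_default();
    let mut body = String::new();
    if method == "POST" {
        io::stdin().read_to_string(&mut body)?;
    }

    // Parse the incoming activity; a real handler would verify the HTTP
    // signature before trusting any of this.
    let activity: serde_json::Value =
        serde_json::from_str(&body).unwrap_or(serde_json::Value::Null);
    let kind = activity.get("type").and_then(|t| t.as_str()).unwrap_or("");

    match kind {
        // Queue follows, mentions and boosts for the static-generation step.
        "Follow" | "Create" | "Announce" => {
            std::fs::OpenOptions::new()
                .create(true)
                .append(true)
                .open("inbox.ndjson")? // invented file name
                .write_all(format!("{}\n", activity).as_bytes())?;
            print!("Status: 202 Accepted\r\nContent-Type: text/plain\r\n\r\nqueued\n");
        }
        _ => print!("Status: 400 Bad Request\r\nContent-Type: text/plain\r\n\r\nunsupported\n"),
    }
    Ok(())
}
```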


On my instance, the following control measures apply:

  • Only public posts are visible through the web interface.
  • Only public posts appear on RSS.
  • Following requires approval.
  • Authorised fetch is required.

So I think I have reason to feel fairly strongly that followers-only posts are not public, and even unlisted posts are reasonably restricted.
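
For clarity, the visibility rules I'm describing amount to something like this sketch (invented types, not any implementation's real code): the web interface and the RSS feed only ever surface public posts, and anything else is only served on an authorised, signed fetch.

```rust
// Simplified sketch of the visibility rules listed above; the types and
// functions are invented for illustration.
#[derive(PartialEq)]
enum Visibility {
    Public,
    Unlisted,
    FollowersOnly,
    Direct,
}

struct Post {
    visibility: Visibility,
    content: String,
}

/// Posts exposed on the public web interface and the RSS feed.
fn publicly_visible(posts: &[Post]) -> Vec<&Post> {
    posts
        .iter()
        .filter(|p| p.visibility == Visibility::Public)
        .collect()
}

/// With authorised fetch enabled, anything non-public needs a valid HTTP
/// signature at minimum, and followers-only/direct posts are only ever
/// delivered to their audience, never served to arbitrary fetches.
fn can_serve(post: &Post, has_valid_signature: bool) -> bool {
    match post.visibility {
        Visibility::Public => true,
        Visibility::Unlisted => has_valid_signature,
        Visibility::FollowersOnly | Visibility::Direct => false,
    }
}

fn main() {
    let posts = vec![
        Post { visibility: Visibility::Public, content: "hello world".into() },
        Post { visibility: Visibility::FollowersOnly, content: "for followers".into() },
    ];
    for p in publicly_visible(&posts) {
        println!("{}", p.content);
    }
    assert!(!can_serve(&posts[1], true));
}
```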


I’ve had just this case. Wanted to use a particular crate that uses async and it’s forcing me to do lots of async things I’m unfamiliar with. I resent it a little, especially for a program that I’m fairly sure will not require concurrency of this sort.

At the same time, maybe I’ll get used to async rust if I use it enough. But so far I’m not having a lot of fun with it.
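
One workaround I keep coming back to (whether it suits a given crate is another matter) is to keep the whole program synchronous and only enter the async world right at the call site, via a blocking Tokio runtime. This is just a sketch, and `fetch_thing` stands in for whatever the actual crate exposes.

```rust
// Sketch: confine async code behind a blocking wrapper so the rest of the
// program stays synchronous. `fetch_thing` is a placeholder for the real
// async API of the crate in question.
async fn fetch_thing(id: u64) -> Result<String, std::io::Error> {
    // Placeholder for the crate's real async work.
    Ok(format!("thing {}", id))
}

fn fetch_thing_blocking(id: u64) -> Result<String, Box<dyn std::error::Error>> {
    // A current-thread Tokio runtime is enough when no real concurrency is needed.
    let rt = tokio::runtime::Builder::new_current_thread()
        .enable_all()
        .build()?;
    Ok(rt.block_on(fetch_thing(id))?)
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // The rest of the program never has to know about async.
    println!("{}", fetch_thing_blocking(42)?);
    Ok(())
}
```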


As far as I can tell, this is incorrect. If there’s a post on instance A, a reply from instance B, and someone on instance C follows the OP on A but not the RP on B, they will only see the OP without the reply.

Source: I notice this very often because I run a single-user instance, and when I open a thread it's incomplete, missing posts even from instances that I have not suspended.


The biggest issues for me are:

  1. No centralisation means there’s no canonical single source of truth.
  2. Account migration.
  3. Implementation compatibility.

No single source of truth leads to the weird effect that if you check a post on your instance, it will have different replies from those on a different instance. Only the original instance where it was posted will have a complete reply set, and even then only if there are no suspensions involved. Some of this is fixable in principle, but there are technical obstacles.
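
On the "fixable in principle" point: the origin instance usually exposes a `replies` collection on the post, so a remote instance could in theory fetch it and backfill the thread. A rough sketch of that idea follows; it uses blocking reqwest, the URL is invented, and there's no HTTP signature attached, so instances requiring authorised fetch would refuse it as written.

```rust
// Rough sketch: fetch the origin post as ActivityPub JSON and pull out its
// `replies` collection, which a remote instance could use to backfill a thread.
use serde_json::Value;

fn fetch_replies_collection(post_url: &str) -> Result<Option<Value>, reqwest::Error> {
    let client = reqwest::blocking::Client::new();
    let post: Value = client
        .get(post_url)
        .header("Accept", "application/activity+json")
        .send()?
        .json()?;

    // `replies` is typically a (possibly paged) Collection owned by the origin instance.
    Ok(post.get("replies").cloned())
}

fn main() -> Result<(), reqwest::Error> {
    // Invented example URL; any public ActivityPub object URL would do.
    if let Some(replies) = fetch_replies_collection("https://instance-a.example/notes/1")? {
        println!("{}", replies);
    }
    Ok(())
}
```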

Account migration is possible, but migration of posts and follows is non-trivial, and migration between different implementations is usually not possible. It would be nice if people could keep a distinction between their instance and their identity, so that the identity could refer to their own domain, for example.
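
On the identity point, WebFinger already gets part of the way there: you can serve `/.well-known/webfinger` from your own domain and point it at wherever the actor actually lives. A minimal sketch of the lookup side, with a made-up account and domain:

```rust
// Minimal sketch of a WebFinger lookup: given user@domain, ask that domain's
// /.well-known/webfinger endpoint where the ActivityPub actor lives.
use serde_json::Value;

fn resolve_actor(user: &str, domain: &str) -> Result<Option<String>, reqwest::Error> {
    let url = format!(
        "https://{}/.well-known/webfinger?resource=acct:{}@{}",
        domain, user, domain
    );
    let doc: Value = reqwest::blocking::get(&url)?.json()?;

    // Look for the "self" link carrying the ActivityPub media type.
    let actor = doc["links"]
        .as_array()
        .and_then(|links| {
            links
                .iter()
                .find(|l| l["rel"] == "self" && l["type"] == "application/activity+json")
        })
        .and_then(|l| l["href"].as_str())
        .map(String::from);

    Ok(actor)
}

fn main() -> Result<(), reqwest::Error> {
    // The handle's domain can differ from the domain that hosts the actor.
    println!("{:?}", resolve_actor("alice", "alice.example")?);
    Ok(())
}
```

This only solves discovery, though; the posts and followers still live on whatever instance the actor document points at.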

Last, there's implementation compatibility. Ideally it should be possible to use the same account to access different services, and to some extent it works (Mastodon can post replies to Lemmy or upvote, but not downvote, for example).


Well, in a way that's what we're doing now, and by and large it works, but obviously there's some leakage, which is impossible to bring down to zero but which it makes sense to keep working on reducing.

The other side of the coin is that the price of this moderation model is subjecting a lot more people to a lot more horrible shit, and I unfortunately don’t know any way around that.


Perhaps the manual reporting tool is enough? Then that content can be forwarded to the central MS service. I wonder if that API can report back to say whether something is a positive match.

The problem with a lot of this tooling is you need some sort of accreditation to use it, because it somewhat relies on security through obscurity. As far as I know you can’t just hit MS’s servers and ask “is this CSAM?” If something like that were possible it might work.

Can you elaborate on the hash problem?

Sure. When you have an image, you can do lots of things to it that change it in some way: change the compression, the format, crop it, apply a filter… All of this changes the file, and so it changes the hash. Perceptual hash systems work on the basis of some computer vision techniques, and the idea is that they try to generate the same hash for pictures that are substantially the same. But this tech is imperfect and will probably keep changing. So if there's a change in the way the hash gets calculated, keeping the hashes wouldn't be enough: you'd have to keep the original files to recalculate them, which means storing CSAM, which is ordinarily not allowed, and for good reason.
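
To make the contrast concrete, the simplest perceptual hash (an average hash) looks roughly like this, here using the `image` crate. The real systems are far more sophisticated, but the property is the same: re-encoding completely changes a cryptographic hash, while a perceptual hash mostly survives it. The file names are hypothetical.

```rust
// Toy average hash: shrink to 8x8 greyscale, then set one bit per pixel
// depending on whether it is brighter than the mean. Small edits
// (recompression, mild filters) usually flip only a few bits, unlike a
// cryptographic hash, which changes entirely.
use image::imageops::FilterType;

fn average_hash(path: &str) -> image::ImageResult<u64> {
    let small = image::open(path)?
        .resize_exact(8, 8, FilterType::Triangle)
        .to_luma8();
    let pixels: Vec<u64> = small.pixels().map(|p| u64::from(p.0[0])).collect();
    let mean = pixels.iter().sum::<u64>() / pixels.len() as u64;

    let mut hash = 0u64;
    for (i, &p) in pixels.iter().enumerate() {
        if p > mean {
            hash |= 1u64 << i;
        }
    }
    Ok(hash)
}

/// Number of differing bits; a small distance means "probably the same picture".
fn hamming_distance(a: u64, b: u64) -> u32 {
    (a ^ b).count_ones()
}

fn main() -> image::ImageResult<()> {
    // Hypothetical files: the same photo saved at two JPEG quality levels.
    let original = average_hash("photo.jpg")?;
    let recompressed = average_hash("photo_q50.jpg")?;
    println!("distance: {}", hamming_distance(original, recompressed));
    Ok(())
}
```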

For a hint of how bad these hashes can get: they are reversible, vulnerable to pre-image attacks, and so on.

Some of this is probably inevitable in this type of system. You don't want to make it easy for someone to hit the servers with a large number of hashes and then use IPFS or the BitTorrent DHT to retrieve positives (you'd be helping people get CSAM). The problem is hard.

Personally I was thinking of generating a federated set based on user reporting. Perhaps enhanced by checking with the central service as mentioned above. This db can then be synced with trusted instances.

Something like that could work, maybe obscuring some of the hash content (random parts of it) so that it doesn’t become a way to actually find the stuff.
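
To sketch what I mean by obscuring parts of the hash (emphatically a toy idea, not a vetted design): the shared list could store hashes with an agreed bit mask applied, so an entry on its own is not a complete hash, while local content can still be checked by masking its own hash the same way and accepting that a hit only means "flag for human review".

```rust
// Toy illustration of the "obscure parts of the hash" idea from the thread.
// The mask, the values, and the false-positive handling are all invented.
use std::collections::HashSet;

/// Mask agreed between trusted instances for a given sync; this example
/// simply keeps alternating bits.
const SYNC_MASK: u64 = 0xAAAA_AAAA_AAAA_AAAA;

fn masked(hash: u64) -> u64 {
    hash & SYNC_MASK
}

/// Build the shareable list from locally reported perceptual hashes.
fn build_shared_list(reported_hashes: &[u64]) -> HashSet<u64> {
    reported_hashes.iter().map(|&h| masked(h)).collect()
}

/// False positives are expected by construction; a hit is only a signal
/// to queue the item for human review, never an automatic action.
fn probably_matches(shared: &HashSet<u64>, local_hash: u64) -> bool {
    shared.contains(&masked(local_hash))
}

fn main() {
    let shared = build_shared_list(&[0xDEAD_BEEF_0000_1234, 0x0123_4567_89AB_CDEF]);
    println!("{}", probably_matches(&shared, 0xDEAD_BEEF_0000_1234));
}
```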

Whatever decisions are made have to be well thought through so as not to make the problem worse.


IMO the hardest part is the legal side, and in fact I'm not very clear on how MS skirted that issue other than through the US's lax enforcement on corporations. In order to have a DB like this, one must store material that is, ordinarily, illegal to store. Because of the use of imperfect, so-called perceptual hashes, and in case of algorithm updates, I don't think one can get away with simply storing the hash of the file. Some kind of computer vision/AI-ish solution might work out, but I wouldn't want to be the person compiling that training set…