I think this is something which ought to be obvious, but which hasn't yet fully sunk in for a lot of "people in tech". We ought to be designing systems which make it easy for online communities to manage themselves, with a minimum of algorithmic folly.
Silo systems like Twitter and Facebook follow two modes of governance:
The old way: centralized moderation

You hire some censors, put them in an office, and have them spend all of their time going through flagged content and removing things. It's a high-stress job with rapid staff turnover, and the censorship policies are all made by a central committee. A central committee which governs for the whole planet. This is obviously unworkable, because no central committee can understand local context, but it has been the Facebook way for at least a decade. In the last few years the limitations of this approach have become clearer, and the cracks in the edifice are now showing.
The new way: algorithmic governance

This is what Facebook is now pursuing. They know they can't hire enough censors to implement more comprehensive human content moderation, so AI is their go-to solution. There's a magical belief that AI is going to solve the governance problem. Of course it isn't, and it may make matters worse, because algorithms ultimately don't understand the context of social situations. Without that contextual wisdom it's extremely hard to screen out algorithmic bias, and no ethics committee or big data mining solution is going to be able to make appropriate decisions on behalf of all the world's communities.
The future of the internet isn't going to be either of these things. It's going to be human community governance at a human scale. Not one committee per planet. One committee per community. Systems need to facilitate the assignment of roles, the setting of governance rules, and ways to enforce those rules. They may also need to allow communities to transact with each other. This is what self-governance means.
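To make that concrete, here is a minimal sketch in Python of the data structures such a system implies. The names (`Community`, `Role`, `Rule`) are invented for illustration and describe no real platform's API; the point is only that roles, rules, and enforcement are all scoped to a single community rather than to the whole planet.

```python
from dataclasses import dataclass, field
from enum import IntEnum


class Role(IntEnum):
    """Roles a community can assign to its own members (hypothetical names)."""
    MEMBER = 1
    MODERATOR = 2
    ADMIN = 3


@dataclass
class Rule:
    """A governance rule written by the community itself, not a central committee."""
    description: str
    enforced_by: Role = Role.MODERATOR  # minimum role allowed to enforce it


@dataclass
class Community:
    """One committee per community: roles, rules, and enforcement all live here."""
    name: str
    roles: dict[str, Role] = field(default_factory=dict)  # member id -> role
    rules: list[Rule] = field(default_factory=list)

    def assign_role(self, member_id: str, role: Role) -> None:
        """Role assignment is a local decision, made inside the community."""
        self.roles[member_id] = role

    def can_enforce(self, member_id: str, rule: Rule) -> bool:
        """Enforcement is checked against this community's own role assignments."""
        return self.roles.get(member_id, Role.MEMBER) >= rule.enforced_by


# Example: two communities with entirely different rules and moderators.
knitting = Community("knitting-circle")
knitting.assign_role("alice", Role.MODERATOR)
knitting.rules.append(Rule("No commercial spam"))

politics = Community("local-politics")
politics.assign_role("bob", Role.ADMIN)
politics.rules.append(Rule("Civility required", enforced_by=Role.ADMIN))

assert knitting.can_enforce("alice", knitting.rules[0])
assert not politics.can_enforce("alice", politics.rules[0])  # alice holds no role here
```

The design choice the sketch illustrates is the scoping: each community object carries its own roles and rules, so no governance decision ever has to escalate to a planetary committee. Transactions between communities would sit on top of this, as agreements negotiated between two such communities.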