There’s still the chance that they have/make an algorithm that can actually be transparent without being exploitable in ways that are detrimental (which is what I would consider a “good algorithm”)… but I agree that this is the least likely outcome.
Still, I couldn’t care less about any of the other outcomes. I have nothing to lose whether Twitter burns or stays as it is 😁
Well, I’m of the opinion that creating such an algorithm isn’t possible, because it is fundamentally possible to game the system (by e.g. creating multiple accounts), and making transparent why a post is promoted also necessarily makes this transparent for anyone wanting to game the system.
Having said that, it seems Musk wants to enforce that all users need to verify as a real, unique person. That would make it harder to game the system, and then they could use an algorithm akin to those for governmental elections.
But yeah, then that algorithm again isn’t useful by itself.
I also doubt the EU will be amused by his plans to nuke user privacy for no real reason.
I’m not opposed to him burning down Twitter either, though. ¯\_(ツ)_/¯
Hmm… that’s interesting actually. Requiring users to authenticate might help with some trolling and abuse, but at the same time identification creates real privacy problems.
A middle ground would be allowing non-verified users to participate but giving them lower influence over content relevance, perhaps with caps that limit how much non-verified activity can affect the weighted relevance of a post (so… content promoted by unverified accounts would get lower priority, and pushing it with a farm of non-verified bot accounts wouldn’t have much of an impact).
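To make that capped-weighting idea concrete, here’s a minimal sketch. All names and numbers are made up for illustration; a real system would tune the weight and cap (and count many more signals than simple boosts):

```python
def relevance_score(
    verified_boosts: int,
    unverified_boosts: int,
    unverified_weight: float = 0.2,  # each unverified boost counts as a fraction
    unverified_cap: float = 10.0,    # unverified contribution can never exceed this
) -> float:
    """Score a post: verified boosts count fully, unverified ones are
    down-weighted and their total contribution is capped."""
    unverified_total = min(unverified_boosts * unverified_weight, unverified_cap)
    return verified_boosts + unverified_total

# 5 verified boosts plus a farm of 1,000 unverified bot boosts:
# the bots' contribution is capped at 10, so the score is 15, not 205.
print(relevance_score(5, 1000))
```

With these example numbers, a bot farm can add at most 10 points no matter its size, so unverified manipulation has a hard ceiling while genuine unverified users still count for something.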
Of course there’s likely gonna be some level of bias based on which people would go through the trouble of verifying themselves… but that’s not the same thing as not being transparent. Bias is a problem you cannot escape no matter what. If a social network is full of idiots, the algorithm isn’t gonna magically make their conversations any less idiotic. So I think the algorithm could still be a good and useful thing to come out of this, even if the social network itself isn’t.