Usually, the ranking algorithms of big websites are needlessly complex, because they need to be resilient against people trying to game the system. So, yeah, it may be good at what it does, but I doubt it would be terribly useful for e.g. Mastodon to adopt…
Personally, I wouldn’t say that an algorithm that relies on obscurity (needless complexity being a form of obscurity) would be a good algorithm, not when it’s public. I guess we’ll see.
It’s possible that the algorithms will have to be heavily refactored, cleaned up and maybe simplified before they are publicly released, since I expect that many of those approaches would be useless against someone with access to the code and the ability to run tests against it systematically to “game the system”.
Yeah, if you open-source an obscure algorithm, you lose the “security by obscurity”.
Much like with encryption algorithms, you could push the obscurity out into parametrisation, but that only makes it transparent how the algorithm could work in theory (see the sketch after this post).
In practice, it will still be obscured, which is where Musk supposedly wants more transparency.
So, yeah, either he doesn’t open-source it, the open-sourcing is useless for transparency, or we’ll watch Twitter burn to the ground. 🙂
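As a rough illustration of the parametrisation idea (purely hypothetical, nothing to do with Twitter’s actual code): the scoring formula itself can be published while the weights that drive it stay server-side, much like a published cipher with a secret key.

```python
# Hypothetical illustration, not Twitter's actual code: the ranking
# formula can be public while the weights that drive it stay secret,
# much like a published cipher with a secret key.

SECRET_WEIGHTS = {        # server-side only, never published
    "likes": 0.7,
    "reposts": 1.3,
    "recency": 2.1,
}

def score(post, weights=SECRET_WEIGHTS):
    """Publicly documented formula; the actual behaviour still depends
    on weights that only the operator can see."""
    return (weights["likes"] * post["likes"]
            + weights["reposts"] * post["reposts"]
            + weights["recency"] / (1.0 + post["age_hours"]))
```

Open-sourcing `score` would tell you the shape of the ranking, but not how posts are actually ordered in practice, which is the point being made about transparency.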
There’s still the chance that they have/make an algorithm that can actually be transparent without being exploitable in ways that are detrimental (which is what I would consider a “good algorithm”)… but I agree that this is the least likely outcome.
Still, I couldn’t care less about any of the other outcomes. I have nothing to lose whether Twitter burns or stays as it is 😁
Well, I’m of the opinion that creating such an algorithm isn’t possible, because it is fundamentally possible to game the system (e.g. by creating multiple accounts), and making it transparent why a post is promoted also necessarily makes that transparent to anyone wanting to game the system.
Having said that, it seems Musk wants to require that all users verify as real, unique persons. That would make it harder to game the system, and then they could use an algorithm akin to those used for governmental elections.
But yeah, then that algorithm again isn’t useful by itself.
I also doubt the EU will be amused by his plans to nuke user privacy for no real reason.
I’m not opposed to him burning down Twitter either, though. ¯\_(ツ)_/¯
Hmm… that’s interesting, actually. Requiring users to authenticate might help with some instances of trolling and abuse, but at the same time the identification creates a privacy problem.
A middle ground would be allowing non-verified users to participate, but giving them less influence on content relevance, perhaps with caps limiting how much non-verified activity can affect a post’s weighted relevance (so content promoted by unverified accounts would get lower priority, and pushing it with a farm of unverified bot accounts wouldn’t have much of an impact; see the sketch after this post).
Of course there’s likely gonna be some level of bias based on who would go through the trouble of verifying themselves… but that’s not the same thing as not being transparent. Bias is gonna be a problem you cannot escape no matter what. If a social network is full of idiots, the algorithm isn’t gonna magically make their conversations any less idiotic. So I think the algorithm could still be a good and useful thing to come out of this, even if the social network itself isn’t.
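A minimal sketch of that “middle ground”, assuming a simple boost-counting model; the weight and the cap are made-up numbers, purely illustrative.

```python
# A minimal sketch of the "middle ground" above; the weight and cap
# are made-up numbers, purely illustrative.

UNVERIFIED_WEIGHT = 0.2   # each unverified boost counts for less
UNVERIFIED_CAP = 10.0     # total influence unverified accounts can add

def relevance(post):
    verified = sum(1.0 for u in post["boosters"] if u["verified"])
    unverified = UNVERIFIED_WEIGHT * sum(
        1.0 for u in post["boosters"] if not u["verified"])
    # A farm of unverified bot accounts saturates at the cap instead of
    # dominating the ranking.
    return verified + min(unverified, UNVERIFIED_CAP)
```

With a weight of 0.2 and a cap of 10, even ten thousand unverified boosts would add no more than the equivalent of ten verified ones, so a bot farm can nudge a post but not dominate the ranking.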