• 2 Posts
  • 126 Comments
Joined 2 years ago
Cake day: June 15th, 2023

  • Right, so this is exactly the sort of “benefit” I never expect to see. It is not something that has happened to me in ~25 years of computer use, and if it did happen there are better ways to deal with it. Btrfs and ZFS have quotas for this, but even if they didn’t, it would not be worth the tradeoff for me. Mispredicting the partition sizes I’ll end up needing after years of use is both more likely to happen and more tedious to fix.


  • Are you going to dual boot? Do you have some other special requirement? If not, there’s no reason to overthink partitioning in my opinion. This is what I did for my main NVMe drive:

    • Partition table: GPT
    • /boot : 1GB FAT32 partition. Depending on your needs (number of kernels, initramfs images, other OSes) you might be fine with 500MB or even less, but because resizing can be a pain and I have the space to spare, I would much rather overprovision.
    • / : LUKS2 partition containing a btrfs filesystem with all the remaining space

    I use a swap file, so I don’t have a swap partition. If you want more control over specific parts of the filesystem, e.g. a separate /home that you can snapshot or keep when reinstalling the system, use btrfs subvolumes instead (rough sketch below). They give you most of the features a separate partition would, without committing to a specific size.
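
    For what it’s worth, here’s a minimal sketch of what that looks like, assuming the btrfs filesystem is already created and mounted at /mnt, and that you have root and btrfs-progs available; the @/@home subvolume names are just a common convention I’m using for illustration, not a requirement:

    ```python
    # Sketch: btrfs subvolumes instead of separate partitions.
    # Assumes the filesystem is mounted at /mnt and this runs as root
    # (all paths and names here are illustrative).
    import subprocess

    def run(*cmd: str) -> None:
        """Run a command, echoing it first, and fail loudly on errors."""
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    run("btrfs", "subvolume", "create", "/mnt/@")        # will be mounted as /
    run("btrfs", "subvolume", "create", "/mnt/@home")    # will be mounted as /home

    # Subvolumes can be snapshotted independently, e.g. a read-only
    # snapshot of just the home subvolume:
    run("btrfs", "subvolume", "snapshot", "-r", "/mnt/@home", "/mnt/@home-snap")
    ```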

    This is the only partitioning scheme I have never regretted. Whenever I’ve tried separate partitions, I’ve ended up regretting the sizes I allocated, and I have never actually seen any benefit from the separation in practice.




  • I had been working for only a few months at my first job, and it was the first time I could buy a desktop PC without very tight budget constraints. So I thought I’d look at a more midrange GPU for once. I wasn’t convinced it would be worth it, but I said fuck it, what’s the point of working and making money if I’m scared to spend it on something I want? So I bought an AMD Radeon RX 5700 XT for ~400€ sometime around Christmas 2019. If you’ve been following PC hardware prices in the COVID era, you know I’m extremely happy with my decision.



  • The problem with any excuse you make for Elon is that Elon is too stupid to keep his mouth shut and give the excuse any plausibility. After the Nazi salute he went on Twitter to make Nazi puns about it. It is certain beyond reasonable doubt that he knows exactly what the salute was. Even if you give him the insane benefit of the doubt that it was really “his heart going out” and it accidentally looked like the salute, the fact that he has shown he knows what it looks like but has never stated that he doesn’t believe in the ideology or want to present himself as an ally to Nazis is just as damning.



  • Maybe in some cases. But Google support once asked me to provide a video for a very simple and clear issue we were having. We have a contract with them, and we personally brought the issue up with a Google employee during a call. There was no concern about AI-generated bullshit, but they still wouldn’t respond without a video. So maybe there’s more to this trend than what you’re theorizing.







  • The DMCA takedown seems to be specifically about Ryujinx’s ability to decrypt ROMs. Circumventing DRM is in fact illegal under the DMCA, so they appear to have a valid argument. However, the takedown notice assumes that the decryption keys are obtained illegally. I’m wondering whether the DMCA permits extracting the decryption keys (without distributing them) from your own legitimately owned Nintendo hardware for personal backup. If it does, then the Ryujinx feature might also be defensible.

    This also raises the question of whether an emulator could be made to work only on already-decrypted media, leaving it to you to figure out how to do the decryption yourself. Nintendo could argue that its main use is still to play illegally decrypted ROMs, but the emulator would have a decent defense imo.


  • “Basically, all encryption multiplies some big prime numbers to get the key”

    No, not all encryption. First of all, there are two main categories of encryption:

    • asymmetric
    • symmetric

    The most widely used asymmetric algorithms rely on the prime factorization problem or on similar problems that are weak to quantum computers, so those will break. Symmetric encryption will not break. I’m not saying all this to be a pedant; it’s actually significant for the safety of our current communications. Well-designed schemes like TLS and the Signal protocol use a combination of both types because they have complementary strengths and weaknesses. In very broad strokes (rough sketch after the list):

    • asymmetric encryption is used to initiate the communication because it can verify the identity of the other party
    • a key-agreement algorithm that is safe against eavesdropping is used to negotiate a key for symmetric encryption
    • the symmetric key is used to encrypt the payload and is thrown away once the communication is over
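
    As a very rough illustration of that pattern (not any particular protocol; the choice of Python’s cryptography package, X25519 for the exchange, and AES-GCM for the payload are just assumptions for the sketch, and real handshakes also authenticate the public keys):

    ```python
    # Sketch of the handshake-then-payload split described above: an ephemeral
    # key exchange yields a shared secret, it is stretched into a symmetric
    # session key, the payload is encrypted symmetrically, and the session key
    # is discarded afterwards.
    import os

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # Each side generates an ephemeral key pair; only public keys cross the wire.
    alice = X25519PrivateKey.generate()
    bob = X25519PrivateKey.generate()

    # Both sides derive the same shared secret from their own private key and
    # the other side's public key.
    shared_secret = alice.exchange(bob.public_key())

    # Stretch the shared secret into a fixed-length symmetric session key.
    session_key = HKDF(
        algorithm=hashes.SHA256(), length=32, salt=None, info=b"demo handshake"
    ).derive(shared_secret)

    # Encrypt the actual payload symmetrically (AES-256-GCM here); the session
    # key is thrown away once the conversation is over.
    nonce = os.urandom(12)
    ciphertext = AESGCM(session_key).encrypt(nonce, b"the actual message", None)
    ```

    The identity-verification step (signing or certifying those public keys) and the post-quantum variants of the key exchange are exactly the pieces a real protocol layers on top of this.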

    This is crucial because it means that even if someone is storing your messages today to decrypt them in the future with a quantum computer, they are unlikely to succeed if a sufficiently strong symmetric key is used. They may decrypt the initial handshake messages and see the exchange used to negotiate the symmetric key, but they won’t be able to derive the key itself because, as we said, the negotiation is safe against eavesdropping.

    So a lot of today’s encrypted messages are safe. But in the future a quantum computer will be able to recover the private key used for the asymmetric encryption and perform a MitM attack, or straight-up impersonate another entity. So we have to migrate to post-quantum algorithms before we get to that point.

    For storage, I believe only symmetric algorithms are generally used, so that’s already safe as is, assuming as always a strong algorithm and a sufficiently long key.



  • This is really funny to me. If you keep optimizing this process, you’ll eventually remove the AI parts completely. It really shows how some of the pains AI claims to solve are self-inflicted. A good UI would have let the user complete this transaction in the time it took to give the AI its initial instructions.

    On this topic, here’s another common anti-pattern that I’m waiting for people to realize is insane and do something about:

    • person A needs to convey an idea/proposal
    • they write a short but complete technical specification for it
    • it doesn’t comply with some arbitrary standard/expectation so they tell an AI to expand the text
    • the AI can’t add any real information, it just spreads the same information over more text
    • person B receives the text and is annoyed at how verbose it is
    • they tell an AI to summarize it
    • they get something that basically aims to be the original text, but it has been passed through an unreliable, hallucination-prone, energy-inefficient channel

    Based on true stories.

    The above is not to say that every AI use case is made up or that the demo in the video isn’t cool. It’s also not a problem exclusive to AI. It’s a more general observation that people don’t question the sanity of interfaces enough, even when complying with them costs a lot of extra work.


  • It’s much more complicated than this. Given that models have been shown to spit out verbatim copies of some training material, it can be argued that the weights do in fact encode the material, just in an obfuscated way. It can also be argued that the model’s output is a derivative copy of the original work regardless of whether the original can be “found inside” the weights, simply by the nature of the process. As of now, I know of no precedent on whether this constitutes redistribution of copyrighted material.


  • “How many months should he have waited for an authoritative response?”

    Well, Marcan should wait as long as feels right to him. As I said previously, I’m pretty sure he was already pissed off about previous R4L issues and didn’t quit because of this alone. To be clear, I’m commenting solely on the expectation of a swifter response from leadership in the original email thread, not on Marcan’s decision to step down, which I’m in no position to judge.

    So, I expect people in positions of power to take their time when they respond publicly to issues like this, for various reasons, e.g.:

    • they might try to resolve things in private first (seems to be the case)
    • they might want to discuss with their peers to double-check their decision making and to take collective action; this is especially true if the CoC committee gets involved
    • they might want to chime in when people have calmed down and they expect to be able to have meaningful conversations with them

    At the very least, I would have waited to see what happens with the patches if I were in his position. The review process, which kept going in the meantime, essentially sets a timer for a decision to be made: in the end, Hellwig’s objections would either be acknowledged as blocking or they would be ignored. Either way there would have been a clear stance from the project’s leadership. It makes sense to me to wait for that inevitable outcome before making a decision as final as stepping down.