• 0 Posts
  • 123 Comments
Joined 2 years ago
Cake day: June 6th, 2023

  • No, not really. “Casting” through the netflix app basically just turns your phone into a remote for your TV. The TV still plays videos from Netflix directly, using the Netflix app (or website). Casting using Google or Apple’s solution casts to a proprietary device with all the content protections functional, just like using the app on those devices.

    The content protections are far easier to bypass on a computer, using the website and some black magic. Removing/paywalling casting purely takes convenience away from the user, and likely had barely any financial impact on the company.



  • The difference is what code runs on your device. If proprietary libraries are included, F-Droid won’t build it, and it’s not allowed in their repository. There’s a lot to say about whether a FOSS app that relies on proprietary network services is truly “free”, but there’s no arguing about an app that includes proprietary code blobs: it isn’t.

    Take for example an app like NewPipe. The application itself doesn’t include proprietary code, but it contacts YouTube, a proprietary Google service. With the app itself being open source, you can tell exactly what it is doing on your device, and what information is sent over the network. Compare that to something like Signal, which includes proprietary Google libraries: you’d have to decompile and reverse engineer it to figure out what it’s doing.

    If you have a FOSS library that interacts with Google Play Services or microG to enable FCM, it would (probably) be allowed on F-Droid. (I’m not on their team, I can’t make a definitive statement about this).


  • “No Google Play services” falls under “app must be FOSS”. The average publicly developed open source app should not have much trouble getting into F-Droid if the developer wants to. Google Play services consists of several components, one of which is a proprietary library included in apps using it. If your app includes proprietary code, it is not FOSS.

    If Signal decided a build without proprietary blobs isn’t worth it, they’re not getting into F-Droid. Forks of Signal that remove the Google Play services build requirement do exist; those are in F-Droid.





  • I’m going to assume you’re unable to see the embedded image. I didn’t add alt text, that’s my mistake.

    Below “Besides”, there is a screenshot of a tweet by user @haydendevs stating “this is who you’re arguing with online” and an attached image of a series of dots connected by lines. This is the (overused) visual representation of a “neural network” in machine learning. In this context, the image implies you are arguing with bots or AI online. I used this Twitter screenshot as an attempt to joke about the fact that the OP reads like AI-generated text.

    I will edit the alt text in my comment above.


  • MPV is great, I use it all the time. It’s fully replaced VLC on my desktop.

    It is not an “alternative to Jellyfin”, though. It does not offer many “comfort features” like watch tracking (synced out of the box). It does not transcode at all, and it doesn’t even run on the devices that need transcoding most, like smart TVs.

    These two applications fall into different categories, and they will never replace each other. One is a media player: you throw mpv any video file and it puts it up on screen. Great. The other is a media server: it lets you sign in, browse your nicely organized library, and click play on the movie of your choice. Very cool.

    Even the idea of opening SMB or NFS to the entire internet just so your most technical of friends can manually download and watch a movie is insane compared to setting up Jellyfin. Reminder: not everyone has the connection to stream a full 4K Blu-ray rip; transcoding allows those users to watch at all.

    Besides,

    Screenshot of a tweet by user @haydendevs stating “this is who you’re arguing with online”, and an attached image of a series of dots connected by lines. This is the often used visual representation of a “neural network” in machine learning.



  • This is heavily sensationalized. UEFI “secure boot” has never been “secure” if you (the end user) trust vendor or Microsoft signatures. Alongside that, this ““backdoor”” (diagnostic/troubleshooting tool) requires physical access, at which point there are plenty of other things you can do with the same result.

    Yes, the impact is theoretically high, but it’s the same for all the other vulnerable EFI applications MS and vendors sign willy-nilly. In order to get a properly locked-down secure boot, you need to trust only yourself.

    When you trust Microsoft’s secure boot keys, all it takes is one signed EFI application with an exploit to make your machine vulnerable to this type of attack.

    Another important part is persistence, especially for UEFI malware. The only reason persistence is so easy here is that Windows’ built-in “factory reset” is so terrible; a fresh install from a USB drive can easily avoid that.


  • Is there anything stopping viruses from doing virus things?

    Usually that’s called sandboxing. AUR packages don’t have any; if you install random AUR packages without reading them, you run the risk of installing malware. Using Flatpaks from Flathub while keeping their permissions in check with a tool like Flatseal can help guard against this.
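    For example, the same permission tightening Flatseal does can be sketched on the command line (assuming the Flatpak CLI is available; the app ID below is a placeholder):

    ```sh
    # Revoke home directory access for a hypothetical app (user-level override)
    flatpak override --user --nofilesystem=home org.example.App

    # Inspect the overrides currently applied to it
    flatpak override --user --show org.example.App
    ```

    Flatseal is just a GUI over these same overrides.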

    The main difference is that even though the AUR is entirely user-submitted content, it’s a centralized repository, unlike random websites. Malware on the AUR is significantly less common, though not impossible. Sticking to packages with a better reputation will avoid some malware, simply because other people have looked at the same package.


    There is no good FOSS antivirus that runs on Linux and also targets Linux malware. ClamAV is the closest, though it won’t help much.


  • After GRUB unlocks /boot and boots into Linux proper, is there any way to access /boot without unlocking again?

    No. “Unlocking” an encrypted partition is nothing more than setting up decryption. GRUB does this for itself, loads the files it needs, and then runs the kernel. Since GRUB is not Linux, its decryption is implemented differently, and there is no way to “hand over” the “unlocked” partition.

    Are the keys discarded when initramfs hands off to the main Linux system?

    As the fs in initramfs suggests, it is a separate filesystem, loaded into RAM when initializing the system. It might contain key files, which the kernel can use to decrypt partitions during boot. After booting (pivoting root), the key files are unloaded along with the rest of the initramfs (afaik, though I can’t directly find a source on this rn). (Simplified explanation:) the actual keys are actively used by the kernel for decryption and are not unloaded or “discarded”; those are kept in memory.

    If GRUB supports encrypted /boot, was there a ‘correct’ way to set it up?

    Besides where you source your rootfs key from (in your case a file in /boot), the process you described is effectively how encrypted /boot setups work with GRUB.

    Encryption is only as strong as the weakest link in the chain. If you want to encrypt your drive solely so a stolen laptop doesn’t leak any data, the setup you have is perfectly acceptable (though for that, encrypted /boot is not necessary). For other threat models, having your rootfs key (presumably LUKS2) inside your encrypted /boot could significantly decrease security, as GRUB (afaik) only supports LUKS1.

    Or am I left with mounting /boot manually for kernel updates if I want to avoid steps 3 and 4?

    Yes, although you could create a hook for your package manager to mount /boot on kernel or initramfs regeneration. Generally, this is less reliable than automounting on startup, as automounting ensures any change to /boot always lands on the boot partition, not accidentally in a directory on your rootfs, even outside the package manager.
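    On Arch-based systems, for example, such a hook could look roughly like this (the path, trigger targets, and description are illustrative, not a tested setup):

    ```ini
    # /etc/pacman.d/hooks/90-mount-boot.hook (illustrative sketch)
    [Trigger]
    Operation = Install
    Operation = Upgrade
    Type = Package
    Target = linux

    [Action]
    Description = Mounting /boot before kernel transaction...
    When = PreTransaction
    Exec = /usr/bin/mount /boot
    AbortOnFail
    ```

    As said, automounting at boot is generally the more reliable option.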


    If you require it, there are “more secure” ways of booting than GRUB with encrypted /boot, like UKIs with secure boot (custom keys). If you only want to ensure a stolen laptop doesn’t leak data, encrypted /boot is a hassle not worth setting up (besides the learning process itself).
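    For reference, the GRUB side of an encrypted /boot boils down to two things: formatting the partition with something GRUB can open, and enabling its cryptodisk support. A rough sketch (device name and distro layout are assumptions, don’t run this blindly):

    ```sh
    # Format the boot partition as LUKS1, which GRUB can reliably open
    # (destroys data on /dev/sdX2 -- placeholder device!)
    cryptsetup luksFormat --type luks1 /dev/sdX2

    # In /etc/default/grub, enable cryptodisk support:
    #   GRUB_ENABLE_CRYPTODISK=y

    # Then regenerate the config so GRUB includes its crypto modules
    grub-mkconfig -o /boot/grub/grub.cfg
    ```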


  • The main oversimplification is that, where browsers “just visit websites”, SSH can be really powerful. You can send/receive files with scp, or even port forward with the right flags on ssh. If you stick to ssh user@host without extra flags, the only thing you’re telling SSH to do is set up a text connection where your keyboard input gets sent, and some text is received (usually command output, like from a shell).

    As long as you understand what you’re asking SSH to do, there’s little risk in connecting to a random server. If you scp a private document from your computer to another server, you’ve willingly sent it. If you ssh -R to port forward, you’ve initiated that. The server cannot simply tell your client to do whatever it wants; you have to do these things yourself.
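    To make that concrete, here are the kinds of commands where you are the one granting the server something (host and paths are placeholders):

    ```sh
    # Plain interactive session: just a text channel, nothing shared
    ssh user@example.com

    # You explicitly upload a file -- the server didn't "take" it
    scp ./private.pdf user@example.com:/tmp/

    # You explicitly ask for a remote forward, exposing your local port 80
    ssh -R 8080:localhost:80 user@example.com
    ```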


    1. I do not personally have experience with this website
    2. Connecting to an SSH server with an SSH client is much like connecting to a web server with a web browser. It is theoretically possible for bad things to happen, but automatic (“zero click”) attacks of any kind are difficult to pull off when the software is up to date. Most bad things happen because the user does them themselves, like downloading and running untrusted programs, entering a password on a phishing site, etc.
    3. This is not necessary, given your host system is up to date.

    Note that my answer to 2 is heavily oversimplified, but applies in this scenario of SSH to “OverTheWire”.


  • Saving on some overhead, because the hypervisor is skipped. Things like disk IO to physical disks can be more efficient using multikernel (with direct access to HW) than VMs (which have to virtualize at least some components of HW access).

    With the proposed “Kernel Hand Over”, it might be possible to send processes to another kernel entirely. This would allow booting a completely new kernel, moving your existing processes and resources over, then shutting down the old kernel, effectively updating with zero downtime.

    It will definitely take some time for any enterprises to transition over (if they have a use for this), and consumers will likely not see much use in this technology.



  • SSH in from another machine, and sudo dmesg -w. If the graphics die, it can’t display new logs on the screen. If the rest of the system is fine, an open SSH session should give you more info (and allow you to troubleshoot further).

    You can also check if the kernel is still functional by using a keyboard with a caps-lock LED. If the LED starts flashing after the “freeze”, it’s actually a kernel panic. You’ll have to figure out a way to obtain the kernel panic information (like using tty1).

    After the “freeze”, try pressing the caps-lock key. If the LED turns on when pressing caps-lock, the Linux kernel is still functional. If the caps-lock key/LED does not work, the entire computer is frozen, and you are most likely looking at a hardware fault.

    From there, you basically need to make educated guesses of what to attempt in order to narrow down the issue and obtain more information. For example, try something like glxgears or vkgears to see if it happens with only one of those, or both (or neither).


  • it seems a bit pointless

    Quite the opposite. Linux now frequently matches Windows in performance when running games through Wine/Proton. Targeting Linux natively avoids this translation layer, and can result in better performance, or less CPU overhead for the same performance (which is noticeable especially on devices like the Steam Deck).

    making games for Linux is ironically difficult

    Yes, because of the tooling. If you make a game in Unity, and build for Windows, ““things just work””. If you then build for Linux, you can face any number of random engine issues, like bad controller support, broken mouse grabbing, etc.

    as they can break as libraries change over time

    Valve has thought about this, and designed the Steam Linux Runtime. It does effectively the same thing as Flatpak, except it pulls in the system’s native graphics drivers. The Steam Linux Runtime provides effectively a full (minimal) Linux distribution that game developers can target, ensuring their games keep running, even on more modern systems.


    Gaming on Linux has always been a chicken-and-egg problem. Gamers see there are no games on Linux, so they stick to Windows. Developers see there’s no Linux gaming market, so they stick to Windows. With Proton, Valve interrupted this cycle: most games now work on Linux, but game developers haven’t switched yet. For them to switch, there needs to be a market of Linux users, and the tooling needs to be sufficiently developed for Linux, ensuring the same (or better) quality as the Windows versions of games. This includes game engines, common libraries (like online multiplayer frameworks or voice chat), and possibly development software: 3D modeling software like Blender, the Adobe suite, etc.


  • Security is an insanely broad topic. As an average desktop user, keep your system up to date, and don’t run random programs from untrusted sources (most of the internet). That covers almost everyone’s needs. For laptops, I’d recommend enabling drive encryption during installation, though note that data recovery is harder with it enabled.