• 4 Posts
  • 36 Comments
Joined 5 years ago
Cake day: April 27th, 2021


  • Gentoo user here…but I assume this is about building on distributions that don’t automate it the way Gentoo does.

    Disadvantages:

    • No easy way to uninstall again. Some build systems generate lists of the files they installed, or even some uninstall rules, but that requires either keeping the build directory with the source code around or backing up the necessary build files for a proper uninstall. And some build systems have no uninstall helpers at all.
    • Similarly…updating doesn’t guarantee that all traces of the previous version are removed. If the new build overwrites every file of the previous version…fine. But if an updated version no longer needs files that earlier versions installed, those usually won’t be removed from your system and will stick around.
    • In general, a lack of automation for updates.
    • Compiling takes time
    • You are responsible for dealing with ABI breakages in dependencies. In most cases the source code you compile will depend on other libraries. Either those come from your distro or you also build them from source…but in both cases you are responsible for rebuilding a package if an update to one of its dependencies breaks the ABI.
    • You need build-time dependencies and, depending on your distro, -devel packages installed. If the source code needs an assembler to build, you have to install that assembler, which wouldn’t be necessary for a binary install (you can of course remove those build dependencies again until you need to rebuild). Similar for -devel packages: if the source code depends on a library from your distro, it also needs that library’s header files, pkg-config files and other development-relevant files installed, which many distros split out into separate -devel packages that binaries don’t need.
    • You have to deal with compile flags and settings. It’s up to you to set the optimization level, target architecture and similar compiler options in environment variables. Not a big deal, but still something to look into at the start.
    • You have to deal with compile-time options and dependencies. The build systems might tell you what packages are missing, but you have to “translate” their errors into what to install with your package manager and do it yourself. The same goes for the build systems’ feature detection…you have to read the logs and possibly reconfigure the source code after installing some dependencies, if the build systems turned off features you want because of missing dependencies.
    • Source code and building need disk space, so make sure you have enough free. Similar with RAM…Gentoo suggests 2 GB of RAM for each --jobs of make/ninja, but that’s for extreme cases; you can usually get away with less than 2 GB per job.
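    One way to soften the uninstall problem from the first bullet is to stage the install into a scratch DESTDIR first and keep the resulting file list as a manifest. A minimal sketch, where the inline Makefile stands in for a real project’s (the “demo” project and all paths are made up):

    ```shell
    set -e
    builddir=$(mktemp -d)
    cd "$builddir"
    # Stand-in for a real source tree: a minimal Makefile whose
    # install target honors the DESTDIR convention.
    printf 'PREFIX ?= /usr/local\ninstall:\n\tmkdir -p $(DESTDIR)$(PREFIX)/bin\n\ttouch $(DESTDIR)$(PREFIX)/bin/demo\n' > Makefile
    # Stage into a scratch DESTDIR and record every installed file --
    # that manifest is your only reliable "uninstall rule" later on.
    make install DESTDIR="$builddir/stage"
    ( cd stage && find . -type f ) > manifest.txt
    cat manifest.txt   # -> ./usr/local/bin/demo
    ```

    With the manifest saved you can later remove exactly those files even after the build directory is gone.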

    Of course you also gain a lot of advantages…but that wasn’t asked ;)

    You can “escape” most of the mentioned disadvantages by using a distro like Gentoo that automates much of this. It’s probably worth a look if you plan on doing this regularly.

    edit:typos


  • Yep, I really don’t understand why people use Baloo without content indexing…if you do that, other tools like fd or even mlocate are probably better solutions if all you need is filename search. KDE integration is really the only advantage left then…and I don’t see much need for creating bookmarks/folder-views from filename searches; you hardly ever have recurring searches for the same filenames.

    Baloo only makes sense with content indexing in my view…and there it hardly has any equal. I personally can’t be without this feature anymore. I have used it actively since the KDE4 days (anyone remember Nepomuk?) and my whole workflow is built on it.


  • For me the real advantages of baloo are metadata search and KDE integration.

    Searching for tags with baloosearch6 tag:<tag> is something I use rather often, and I even use the star ratings in Baloo searches with rating>=6. Combine that with a mimetype and you have a quick playlist in Dolphin of all music you rated with 4 or more stars: baloosearch rating>=8 AND type:audio.

    I also use baloosearch for images…the width and height keys are really useful for finding textures with specific dimensions…something like baloosearch type:image AND Width>=2048 AND Height>=2048

    And then of course there is the KDE integration that makes this really useful…you can use baloosearch queries everywhere in KDE: in open-file dialogs, as bookmarks in Dolphin or file dialogs, for desktop widgets showing folders…you can easily create an activity with several folder-views on the desktop, each showing a different set of files with a specific tag…the left folder-view showing all files tagged “WIP” while the right one shows all files tagged “Finished”. (To use queries in KDE you need them in the form baloosearch:/?query=<the query as you would use it in baloosearch6>.)
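    A small sketch of turning a command-line query into that baloosearch:/ KIO URL form, usable as a Dolphin bookmark (only spaces are percent-encoded here; a real encoder would handle more characters):

    ```shell
    # Example query taken from the playlist example above.
    query='rating>=8 AND type:audio'
    # Percent-encode the spaces and prepend the KIO scheme.
    url="baloosearch:/?query=$(printf '%s' "$query" | sed 's/ /%20/g')"
    echo "$url"   # -> baloosearch:/?query=rating>=8%20AND%20type:audio
    ```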

    Edit:I wrote a reddit post some years ago about this…hope linking reddit is okay here: https://www.reddit.com/r/kde/comments/pmcshj/tip_baloosearch_kioslave/


  • You can set most KDE menus to show the “Comment” key of the .desktop files instead of the “Name” key. So “KDE Advanced Text Editor” instead of “Kate”.

    Packages can come with several “programs” that aren’t necessarily named the same as the package. Example: Calibre installs menu items for “Calibre”, “EBookViewer” and “EBookEditor” on my distro.

    It’s not about forgetting…it’s about helping you quickly find what you just installed and what’s all included.
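    For reference, the two keys in question look like this in a .desktop entry (a sketch modeled on Kate’s; the real entry may differ slightly):

    ```shell
    cd "$(mktemp -d)"
    # Minimal .desktop entry: menus can show either Name or Comment.
    cat > kate-example.desktop <<'EOF'
    [Desktop Entry]
    Type=Application
    Name=Kate
    Comment=KDE Advanced Text Editor
    Exec=kate %U
    EOF
    grep '^Comment=' kate-example.desktop   # -> Comment=KDE Advanced Text Editor
    ```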


  • I was wondering too why anyone would ever want this…but the proposal explains it:

    Support for UEFI on MBR was originally added in blivet#764 to accommodate cloud image use cases, such as AWS, which at the time did not support UEFI booting on GPT disks. These constraints no longer apply to modern cloud platforms, making MBR-based UEFI setups unnecessary for current Fedora deployments.

    So basically it was a workaround from a few years ago. I have a hard time seeing any reason against the removal.


  • On-screen keyboard was already mentioned, but there are some other small things that might be useful for some:

    Reboot/shutdown without having to log in (your husband/wife/partner can shut down your computer without first having to log in and be greeted by the porn folder on your desktop…no, seriously, this can be useful at times: you turn on the computer, get called away, and someone else can easily shut it down after you haven’t returned for some hours).

    Keyboard language selection before password entry. Very useful in multi-language households/companies.

    The WM selection also allows kiosk-like behaviour in special cases…for example, you don’t start a WM at all but boot straight into the Kodi media player for a movie evening, or you create your own WM session file for a single game that runs as soon as you log in.
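    A sketch of such a session entry for the “kiosk” idea (Name and Exec are made up; on a real system the file would go to /usr/share/xsessions/ or wayland-sessions/ and then show up in the login manager’s session list):

    ```shell
    cd "$(mktemp -d)"
    # Minimal session file: no WM, just launch one program on login.
    cat > kodi-kiosk.desktop <<'EOF'
    [Desktop Entry]
    Name=Kodi (movie night)
    Comment=Start Kodi directly, no window manager
    Exec=/usr/bin/kodi
    Type=Application
    EOF
    grep '^Exec=' kodi-kiosk.desktop   # -> Exec=/usr/bin/kodi
    ```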


  • It not being worth using is good, I want this malware practice to die.

    Which is a noble goal in my view, and I completely agree. But you will not be able to use anything like wine to achieve this…anti-cheat software is specifically designed to prevent exactly the things wine does (for the simple reason that there is no technical difference between a “cheat program” and what wine does).

    i’m sure there are already workarounds on windows, it’s not like cheating has been eliminated there

    Actually I read about an interesting workaround a few months ago…in games that enabled Linux support in their anti-cheat systems, Windows “cheats” started spoofing the OS signature to make the anti-cheat system think it runs on wine, turning off the kernel-level anti-cheat…

    But as I said, the effectiveness of anti-cheat is a separate discussion, independent of whether wine will support it. Even if anti-cheat systems are ineffective, it doesn’t change that they are mainly aimed at stopping exactly the kind of “trickery” wine does. Wine would have to play the same cat-and-mouse game with anti-cheat as cheats do…if it finds a way to work around them, the anti-cheat systems need to find a way to prevent that workaround.


  • If it could, the anti-cheat system wouldn’t be worth using. Being able to “trick” the anti-cheat system into thinking something other than what is actually happening is going on is exactly what an actual “cheat” would do. That’s why kernel-level anti-cheat systems go through a lot of trouble to detect any kind of virtualization or similar tricks…the moment you can trick them into accepting a fake kernel is also the moment that fake kernel can pretend the fake input it generates actually comes from a real mouse, or that the checksum of the OpenGL/Vulkan library is exactly the one expected and not that of some altered library that “accidentally” forgets to not render stuff behind walls…

    It’s also something to keep in mind when people say “companies can just enable the Linux support in their anti-cheat systems but they don’t.” While this is true, it also means the kernel-level anti-cheat systems are barred from kernel access and degraded to user-space only. And as everyone has access to the source code of the Linux kernel, nothing stops anyone from just modifying the kernel to…give more “favorable results” while playing the game. Of course the Linux player base is too tiny to really offer a market for such cheats…but it’s not completely unreasonable to not want to erode the capabilities of your anti-cheat system (that is, if you believe they work in the first place…but that’s a different discussion).


  • Just to make this clear (sorry if it’s unnecessary, but maybe still useful info for others)…Path= lines in .desktop files are not related at all to the $PATH environment variable. They do something completely different (and yes, picking Path as the key was a terrible choice in my view). Path= lines in .desktop files change the current working directory…they do about the same as a cd <directory> in a shell.

    They do not change where a .desktop file looks for executables…only indirectly, if an executable runs another file relative to the current working directory or looks for images/icons/audio/other data relative to it.

    And I have no clue why it doesn’t work with TryExec…the desktop file spec doesn’t mention anything about that :( ( https://specifications.freedesktop.org/desktop-entry-spec/latest/recognized-keys.html )
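    Put differently, the shell equivalent of Path= is just a cd before the program starts (directory name made up):

    ```shell
    # Roughly what Path=/tmp/wd-demo gives the Exec= program: the same
    # binary, only started with a different working directory.
    mkdir -p /tmp/wd-demo
    ( cd /tmp/wd-demo && pwd )
    ```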


  • Try adding a Path=/home/werecat/Grayjay line to your .desktop file (note: Path, not PATH…the key is unrelated to the $PATH variable). Without it the application runs with your home directory as its working directory…and there the data files are missing (which is why you had to copy them to your home). The Path entry makes the program run in /home/werecat/Grayjay, where the data directories actually are.
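    A minimal sketch of the whole entry (the Exec path is assumed from the thread; adjust it to where the binary actually lives):

    ```shell
    cd "$(mktemp -d)"
    # Path= sets the working directory the program is started in.
    cat > Grayjay.desktop <<'EOF'
    [Desktop Entry]
    Type=Application
    Name=Grayjay
    Exec=/home/werecat/Grayjay/Grayjay
    Path=/home/werecat/Grayjay
    EOF
    grep '^Path=' Grayjay.desktop   # -> Path=/home/werecat/Grayjay
    ```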

    Edit: That is assuming when you started it manually you did a cd Grayjay and a ./Grayjay or similar. So you changed your working directory there first before starting it. If that is not the case ignore my post ;)


  • As a Gentoo user I can’t argue with that… ;)

    But I think there are reasons why someone would want to build the suckless tools manually…namely that their configuration is mostly done in the source code (damn, it’s so hard not to write anything too opinionated about suckless, but I really try my best). But even then I agree with your other post that it’s far better to use the distro facilities for building the distro source packages, just with your own patches applied.


  • Let’s set aside my personal belief that suckless is a satire that too many people started to take seriously…

    Always using the latest git version, as done in the article, doesn’t strike me as the sanest thing to do if you “just” want to use the software, especially as suckless offers release tarballs.

    But suggesting sudo make clean install to build is really not okay…(and it’s also not how the suckless tools I checked suggest it). You cloned (or better, extracted the tarball) as a user…there is not a single reason to build the software as root. If you have to install, then do it in two steps: build as user, and only run “make install” as root.
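    A sketch of that two-step split, with an inline Makefile standing in for a suckless-style source tree (the “dwm” name is just a placeholder, and DESTDIR replaces sudo here so the sketch stays runnable as a normal user):

    ```shell
    set -e
    srcdir=$(mktemp -d)
    cd "$srcdir"
    # Stand-in Makefile with separate build and install targets, as the
    # real suckless tools have.
    printf 'PREFIX ?= /usr/local\ndwm:\n\ttouch dwm\ninstall: dwm\n\tmkdir -p $(DESTDIR)$(PREFIX)/bin\n\tcp dwm $(DESTDIR)$(PREFIX)/bin/\n' > Makefile
    make                                     # step 1: build as your normal user
    make install DESTDIR="$srcdir/fakeroot"  # step 2: on a real system, sudo make install
    ls fakeroot/usr/local/bin   # -> dwm
    ```

    Only the install step touches system directories, so only that step ever needs root.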


  • Aiwendil@lemmy.ml to Linux@lemmy.ml · Distro recommendation (4 years ago)

    “Distro recommendation” questions aren’t usually very useful…all you get is everyone recommending the distro they use. It’s unlikely you can get anything useful out of the answers.

    I wanted something with support and with people that care for the code

    That applies to pretty much every major Linux distro that isn’t a derivative, and also to some of the derivatives that do more than just add cosmetics (unless you specify in a bit more detail what you mean by “care for the code”).

    Also, all distros can be configured; there is no real reason to switch from something like Ubuntu to another distro just because you don’t like how the “Files” manager works…you could get pretty much the same on Ubuntu as other distros offer, and in most cases more easily than by doing a reinstall. Really, you are better off trying to fix an issue you have on your current distro than distro-hopping at every little problem you run into…


  • I guess a mixture of POSIX compatibility, backward compatibility and non-interactive shell use-cases.

    Being somewhat POSIX compatible offers a way to write scripts that work on many systems independent of the actual shell implementation (bash, dash, zsh…). But this means major overhauls of the shell “language” are out of the question…
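    A tiny illustration of that portability point: the same transformation written as a bashism and in portable POSIX sh (strings are arbitrary; the bash-only line is run via an explicit bash -c so the contrast is visible):

    ```shell
    s="hello world"
    # bash/zsh-only parameter substitution -- fails in dash/busybox sh:
    bash -c 's="hello world"; echo "${s/world/there}"'   # -> hello there
    # POSIX-portable equivalent, works in any sh:
    echo "$s" | sed 's/world/there/'                     # -> hello there
    ```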

    Backward compatibility matters for things that ignored the first point and used features only available in bash. Given that bash has been the default on Linux for 30 years now, there are probably plenty of examples.

    And while bash is not the smallest shell, it is also not the largest one…and it is rather configurable at compile time when it comes to supported features. This makes it a viable option as a shell-script interpreter for systems that hardly have any interactive shell usage. It’s not a completely bare-bones shell, so you get a bit of “comfort” for scripts, but you can remove unnecessary things like interactive command-line editing via readline…I can imagine some embedded systems find uses for such a shell.

    And it’s not that there aren’t alternatives…Microsoft’s PowerShell is probably the most successful one “recently”. But changing all existing “workflows” from a text-based approach to an object-based one is not a trivial task…and in addition you run into new problems with any new shell design (for example, I really dislike the overly verbose interactive usage of PowerShell, but that’s rather subjective).