How can you be sure it doesn’t affect popular images? The probability may be lower, but I don’t think you can rule it out.
At https://blog.frehi.be/2023/04/23/the-security-risks-of-flathub/ someone has published an article that addresses a few of Flathub's problems.
Therefore, the answer is that Flathub is not always safe to use. However, I do not know of any package source that is always safe to use. Is Flathub less secure than other package sources? I can't answer that because I don't use solutions like Flatpak, AppImage etc. myself.
Therefore, from my point of view, the disadvantages outweigh the advantages, and I do not have such a tool permanently installed, neither under Linux nor under Windows. However, every six months I scan my Windows installation with a USB-bootable virus scanner. It has not found any actually harmful program in years.
In my opinion, the following things are much more important than any security software.
Especially the last point is a problem for many users. I can’t tell you how many times I’ve witnessed someone receiving an alleged invoice from mobile provider A by email and opening it, even though they had a contract with provider B.
Ran sudo pacman -Syu; sudo pacman -Syy like I do every few days
-Syy forces the package databases to be refreshed even if they appear to be up to date.

In my opinion, this makes no sense, especially right after you have already run pacman -Syu. You only generate additional, unnecessary traffic on the mirror you are using. pacman -Syu is normally sufficient.
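To make the difference concrete, here is a short sketch (run with appropriate privileges; the comments reflect pacman's documented behaviour):

```shell
# Refresh the package databases (only if they are outdated) and
# then upgrade all installed packages:
sudo pacman -Syu

# Force-refresh the databases even if they appear up to date.
# Only really useful after switching mirrors; otherwise it just
# re-downloads data you already have:
sudo pacman -Syy
```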
The journal was really long so I moved past it
The output of the systemd journal can easily be filtered. For example, journalctl -p err -b -1 shows all entries from the previous boot that are marked as error, critical, alert or emergency.
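A few more filter variants, as a sketch (all standard journalctl options; the unit name is just an example):

```shell
# Errors and worse from the previous boot:
journalctl -p err -b -1

# Only messages from a specific unit, current boot:
journalctl -u NetworkManager.service -b

# Follow new messages live, like tail -f:
journalctl -f

# Everything from the last hour:
journalctl --since "1 hour ago"
```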
Has anyone else ran into this issue when updating?
Not me, but other users have. Some of them also use a distribution other than Arch (or one based on it). Looking at the reports, the current kernel seems to be quite a minefield as far as problems are concerned.
Any advice for preventing future crashes or issues like this so I don’t fear updating?
As other users have already recommended, you could additionally install the LTS kernel. And if you use BTRFS as a file system, create snapshots before an update (https://wiki.archlinux.org/title/snapper#Wrapping_pacman_transactions_in_snapshots).
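If you go the snapshot route, the linked wiki page describes snap-pac, a package that hooks snapshot creation into pacman transactions. A minimal sketch, assuming Btrfs and an Arch system (package names as on Arch):

```shell
# Install snapper and the pacman pre/post snapshot hooks:
sudo pacman -S snapper snap-pac

# One-time: create a snapper configuration for the root subvolume:
sudo snapper -c root create-config /

# From now on, every pacman transaction automatically gets a
# pre/post snapshot pair; inspect them with:
sudo snapper -c root list
```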
And it should be obvious that important data should be backed up on a regular basis.
When it comes to SBCs, my choice has always been a Raspberry Pi. Why? A Raspberry Pi may not offer the best performance, but in return you can be sure that it will still be supported after a kernel update. And that is exactly the problem with many alternatives: they support one particular, usually old, kernel, and that's it. Furthermore, the community around the Raspberry Pi is simply huge.
I have been using Borg for years. So far, the tool has not let me down. I store the backups on external hard drives that are only used for backups. In addition, I save really important data at rsync.net and in a storage box at Hetzner. That is not a problem, because Borg encrypts locally, and in my case decryption requires both a password and a key file.
Generally speaking, you should always test whether you can restore data from a backup, no matter which tool you use. Only then do you have a real backup. And an up-to-date backup should also be stored off-site (cloud, at a friend's or relative's house, etc.), because if the house burns down, the external hard drive with the backups next to the computer is not much use.
By the way, I would advise against using just rsync because, as the name suggests, rsync only synchronizes, so you don't keep multiple versions of a file. Versioning can be useful if you only notice later that a file became corrupted at some point.
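For anyone who wants to try Borg, a minimal sketch (repository path and archive names are examples, not my actual setup):

```shell
# One-time: initialise an encrypted repository on the backup drive.
# keyfile mode stores the key outside the repository, so both the
# passphrase and the key file are needed for decryption:
borg init --encryption=keyfile /mnt/backup/borg-repo

# Create an archive; Borg deduplicates, so subsequent runs are fast:
borg create --stats /mnt/backup/borg-repo::docs-{now} ~/Documents

# Periodically verify the repository, and actually test a restore:
borg check /mnt/backup/borg-repo
borg extract --dry-run /mnt/backup/borg-repo::docs-2024-01-01T00:00:00
```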
That might be one of the reasons why the post got some downvotes.
GNU password store
The tool, unless something has changed in the meantime, has one major drawback for me: the filenames of the encrypted files are stored in plain text. However, I don't want people to be able to see, for example, which websites I have an account with. Sure, you can name the files differently. But how am I supposed to remember, for example, that the file dafderewrfsfds.gpg contains the access data for Mastodon?
In addition, pass lacks some functions for my taste. As far as I know, you can't save file attachments or define when a password expires. And so on. Pass is therefore too KISS for me.
Pgp+git and a nice cli to wrap them onto an encrypted password store that’s pretty easy to move around these days.
A matter of opinion, I would say. I prefer my Keepass file which I can access via my Nextcloud instance or which is stored on a USB stick on my keychain.
By the way, the file is secured with a Yubikey in addition to a Diceware password. So saving it in the so-called cloud is no problem. Just as a note, in case someone reading my post wants to make smart remarks about the cloud.
That would be my recommendation as well. I’ve been using a Zowie mouse on Linux for years now.
However, the switches with which you can make the changes are at the bottom of the mouse. Changing the DPI, for example, with one click is therefore not possible. For some users, this is apparently a problem, for whatever reason.
You have to add a line in fstab with the right parameters though…
You can also mount NTFS partitions manually as needed.
I can’t really use NTFS because Linux can’t write to it.
This is not correct.
For example, there is the ntfs-3g driver, which allows read and write access to NTFS partitions. The disadvantage is that it uses FUSE and is therefore slower in some cases.

Since kernel 5.15, read and write access is also offered by the in-kernel ntfs3 driver contributed by Paragon.
https://wiki.archlinux.org/title/NTFS
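As a sketch, both drivers can be used like this (device path, mount point and the uid/gid values are examples; adjust to your system):

```shell
# Manual mount using the in-kernel ntfs3 driver (kernel >= 5.15):
sudo mount -t ntfs3 /dev/sdb1 /mnt/windows

# Or with the FUSE-based ntfs-3g driver:
sudo mount -t ntfs-3g /dev/sdb1 /mnt/windows

# For automatic mounting, a corresponding /etc/fstab line could be:
#   /dev/sdb1  /mnt/windows  ntfs3  defaults,uid=1000,gid=1000  0 0
```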
Because I personally use btrfs as file system for Linux, I use WinBtrfs under Windows.
exFAT would also be a possibility. However, one should be aware that the file system was originally designed only for flash storage such as USB sticks.
Unbound can be configured to send its queries directly to the DNS root servers, which should not be censored. The guide linked by surfbum explains this accordingly.
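In Unbound terms this simply means not configuring a forward-zone, so the resolver recurses from the root servers itself. A minimal sketch of /etc/unbound/unbound.conf (the root.hints path is an assumption; the file can be downloaded from internic.net):

```
server:
    interface: 127.0.0.1
    # Optional: an explicit, up-to-date root hints file instead of
    # the list compiled into Unbound:
    root-hints: "/etc/unbound/root.hints"
    # No forward-zone block: queries are resolved recursively,
    # starting at the root servers, instead of being forwarded
    # to an upstream resolver.
```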
For me, this is the main reason why I use micro. And because I don’t like the handling of vim. Funnily enough, I’ve been playing around with Helix for a while now and I really like the editor, even though it’s a modal editor, just like vim. Maybe because of the selection → action model. The question is, do I like Helix better than micro? I still have to answer that question for myself at some point.
Unfortunately, as always, the best solution does not exist. Everything has its advantages and disadvantages.
For example, I like Chezmoi for managing configuration files. But the tool is only for the configuration files in /home.
Ansible, on the other hand, can be used for both / and /home. But even its basic functions are more complex, which requires some learning time.
according to StatCounter’s data
Our tracking code is installed on more than 1.5 million sites globally.
Such statistics are always to be taken with a grain of salt.
There are more than 1.5 billion websites worldwide, so Statcounter covers only a small fraction of them. Chances are therefore good that you as a Linux user do not visit any of the 1.5 million websites that Statcounter uses to create its statistics.
Furthermore, I suspect that many Linux users use tools like uBlock Origin or Pi-Hole, so that the things that are used to track users are blocked.
Apart from that, I have several Linux installations with which I never access a website. Sometimes they have no direct connection to the Internet. Thus, they are also not recorded.
But now to the most important point: 3 percent of what? Percentages don't tell you anything if you don't know the absolute numbers behind them. Let's assume there were 2.8 percent Linux users in May and only 2.6 percent in June. It is nevertheless possible that there were more actual users in June, if the total number of all users increased accordingly.
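A quick back-of-the-envelope calculation makes this concrete (the totals are made-up numbers, purely for illustration):

```shell
# Hypothetical measured totals: 100 million users in May, 110 million in June.
may_linux=$(( 100000000 * 28 / 1000 ))  # 2.8 % of May's total
jun_linux=$(( 110000000 * 26 / 1000 ))  # 2.6 % of June's total

# June's share is lower, yet its absolute number is higher:
echo "May: $may_linux  June: $jun_linux"  # prints "May: 2800000  June: 2860000"
```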
If I understood it right, the author of the proposal even writes that that opt-in is useless, because nobody is going to enable it, which kinda makes it sound like they know that they’re trying to push something on users that they don’t want.
The question is, why don’t users want it? I have already had a few discussions on the subject of telemetry and telemetry has almost always been portrayed as evil. Even when, for example, the transmission is encrypted and only the most necessary data is transmitted in such a way that no conclusion can be drawn about a specific user.
Is opt-out therefore a good solution? Not in my opinion. But I can understand the developers who use opt-out, for the reasons I mentioned. Because yes, telemetry can help to improve a program.
As far as the AUR is concerned, one should be fair. The things offered in the AUR can be problematic in general, no matter whether you use vanilla Arch or a distribution based on it, because not everyone who offers something in the AUR takes care of updates in a timely manner, or at all.
There is definitely a reason why https://lists.archlinux.org/archives/list/aur-requests@lists.archlinux.org/ exists. Just as there is a reason why there is a general warning about the AUR (https://wiki.archlinux.org/title/Arch_User_Repository).
With Manjaro, I rather see the problem that the team responsible for it apparently does not learn from its mistakes, so that, for example, the SSL certificate of the website has not been renewed several times (https://web.archive.org/web/20230706060943/https://manjarno.snorlax.sh/). That may not be a big problem in itself, but if even such little things go wrong, then I personally cannot trust an entire distribution.
I have several virtual machines here with Arch that I often don't use for months. And when I do use them, I proceed as I do with every update. Before an update, I check whether something has been published at https://archlinux.org/news/ that affects the installation in question. This is done automatically with the help of the tool informant. If something has been published that affects my installations, I take it into account. Otherwise I run pacman -Syu as usual. And that's it.
Compared to nothing. I used Nvidia graphics cards under Linux for many years; the last one was a GTX 1070. To make the cards work, I had to install the driver once with the command pacman -S nvidia-dkms. So the effort was very small. By the way, I am currently using a 6800 XT from AMD, so I don't want to defend Nvidia graphics cards across the board.
Unfortunately, when it comes to Nvidia, many people do not judge objectively. Torvalds' "fuck you", for example, referred to what he saw as Nvidia's lack of cooperation with the kernel developers. And I think he was right. But it was never about how well or poorly the graphics cards worked under Linux, which is nevertheless what many Linux users claim, be it out of ignorance or on purpose.
Since then, some things have changed and Nvidia has contributed code to several projects like Plasma or Mesa to improve the situation regarding Wayland.