Why is cat in your etc?
That’s not where that goes, it goes in /usr/bin/.
I spend a lot of time in /tmp sending temporary output to files and testing commands when building shell scripts. It’s appropriate that a long-haired fluffer butt lives there because that’s been most of my cats through the years.
Not pictured: /opt, the raccoon
Useless amount of copies of cat.
cp $(which cat) /*/
Is it accurate?
Is your server not run by 6 cats?
My ethernet is cat 6.
Little kitties, in some boxes.
Little kitties, all the same.
There’s a white one, and an orange one, and a black one and a calico one,
and they are put in boxes,
and they all look just the same.
Now I’m craving milf weed 😁
There was a little captive cat, upon a crowded train, his mistress takes him from his box to ease his fretful pain…
Can anyone explain to me why it was so important to break the Linux file system?
Like, I believe it was, since literally every single distribution did it, but I don’t get why it was so important that we had to make things incompatible unless you know what you’re doing.
The structure is defined by the Filesystem Hierarchy Standard 3.0, which could be implemented differently depending on the distro. /bin is usually a symlink pointing to /usr/bin.
See also (if you’re curious) two distros that purposefully don’t follow the FHS for one reason or another: GoboLinux and NixOS (there are probably others)
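If you want to check which way your own distro went, a couple of quick commands will show it (this assumes a merged-/usr system; non-merged distros will look different):
$ readlink /bin          # prints "usr/bin" when /bin is just a symlink
$ readlink -f /bin/cat   # resolves to /usr/bin/cat on a merged system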
I love how in the first page of chapter 2 they specify the distinction of files in two classes: shareable and variable. Then they specify that files which differ in any of these two properties should belong to a different root folder. Then they go ahead and give you an example which clearly explains that /var should contain both shareable and non-shareable files. Good job with your 4 categories, I guess that’s why nobody really understands what /var is…
The original reasoning for having all those directories was that some nerds in a university/lab kept running out of HD space and had to keep rearranging the file system to spread everything out between an increasing number of drives.
Noobs should’ve just used zfs
/home because you want to save the user files if you need to reinstall.
/var and /tmp because /var holds log files and /tmp holds temporary files, both of which can easily take up all your disk space. So it’s best to let them fill up a separate partition, to prevent your system from locking up because your disk is full.
/usr and /bin… this I don’t know
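For illustration, that kind of split would look roughly like this in /etc/fstab (device names and the tmpfs size are made up for the example):
# hypothetical /etc/fstab with /home, /var and /tmp on their own partitions
/dev/sda1  /      ext4   defaults                    0 1
/dev/sda2  /home  ext4   defaults                    0 2
/dev/sda3  /var   ext4   defaults                    0 2
tmpfs      /tmp   tmpfs  defaults,size=2G,mode=1777  0 0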
/var holds log files
Not just log files, but any variable/dynamic data used by packages installed on the system: caches, databases (like /var/lib/mysql for MySQL), Docker volumes, etc.
Traditionally, /var and /home are parts of a Linux server that use the most disk space, which is why they used to almost always be separate partitions.
Also /tmp is often a RAM disk (tmpfs mount) these days.
And in immutable distros, one of the few writable areas
True.
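If you’re curious about your own machine, a few standard commands will show whether /tmp is RAM-backed and what’s actually eating /var (exact output obviously varies):
$ findmnt /tmp                    # FSTYPE column says "tmpfs" if it’s a RAM disk
$ df -h /tmp /var                 # how full they are
$ sudo du -xsh /var/* | sort -h   # which /var subdirectory is the big one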
I would think putting /bin and /lib on the fastest thing possible would be nice 🤷
I do not think that matters so much; I guess it just affects the speed at which you load your software into RAM, but once it is loaded the difference in running the software should be pretty small. That’s unless you call a command thousands of times per second, in which case it may improve performance. The fastest drive should generally be reserved for storing software input and output, as that’s where drive speed generally affects execution time the most. Meaning if your software does a blocking read, that read will be faster and thus the software will proceed running quicker after reading. Moreover, software input in general tends to be larger than the binaries; unless we’re talking about an Electron-based text editor.
Could you not just use subdirectories?
They are subdirectories?!
Ok, technically, but why couldn’t we keep a stable, explicit hierarchy without breaking compatibility or relying on symlinks and assumptions?
In other words
Why not /system/bin, /system/lib, /apps/bin?
Or why not keep /bin as a real directory forever?
Or why force /usr to be mandatory so early?
Because someone in the 1970s-80s (who is smarter than we are) decided that single-user mode files should be located in the root and multi-user files should be located in /usr. Somebody else (who is also smarter than we are) decided that it was a stupid ass solution because most of those files are identical and it’s easier to just symlink them to the multi-user directories (because nobody runs single-user systems anymore) than making sure that every search path contains the correct versions of the files, while also preserving backwards compatibility with systems that expect to run in single-user mode. Some distros, like Debian, also have separate executables for unprivileged sessions (/bin and /usr/bin) and privileged sessions (i.e. root, /sbin and /usr/sbin). Other distros, like Arch, symlink all of those directories to /usr/bin to preserve compatibility with programs that refer to executables using full paths.
But for most of us young whippersnappers, the most important reason is that it’s always been done like this, and changing it now would make a lot of developers and admins very unhappy, and lots of software very broken.
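To make the distro difference concrete, the symlinks look roughly like this (a sketch from memory; check your own system, since layouts differ between releases):
# Debian-style merged /usr, with sbin kept separate from bin:
$ readlink /bin /sbin
usr/bin
usr/sbin
# Arch-style, everything collapsed into /usr/bin:
$ readlink /bin /sbin /usr/sbin
usr/bin
usr/bin
bin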
The only thing better than perfect is standardized.
The move to storing everything in /usr/bin rather than /bin etc.? I think it actually makes things more compatible, since if you’re a program looking for something you don’t need to care whether the specific distro decided it should go in /usr/bin or /bin.
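For instance, on a merged-/usr system a script that hard-codes either path ends up at the same file, so neither choice breaks:
$ test /bin/sh -ef /usr/bin/sh && echo "same file"   # -ef: same device and inode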
Who puts /etc on a separate drive?