I’ve been setting up a new Proxmox server and messing around with VMs, and wanted to know what kind of useful commands I’m missing out on. Bonus points for a little explainer.
journalctl | grep -C 10 'foo' was useful for me when I needed to troubleshoot some fstab mount fuckery on boot. It pipes journalctl (the systemd journal, which includes boot logs) into grep to find 'foo', and prints 10 lines of context before and after each match.
I like emerge --moo, just to see how Larry is doing. Only Gentoo tho :(
The watch command is very useful. For those who don't know, it re-runs whatever command you place after it in a loop, with a default interval of two seconds.
It allows you to actively monitor systems without having to manually re-run your command.
So for instance, if you wanted to see all storage block devices and monitor what a new storage device shows up as when you plug it in, you could do:
watch lsblk

And see the drive mount in real time. Technically not "real time" because the default refresh is 2 seconds, but you can specify shorter or longer intervals with -n.
Obviously my example is kind of silly, but you can combine this with other commands or even whole bash scripts to do some cool stuff.
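If you're ever on a box without watch, the core idea is just a timed loop. A rough stand-in (interval shortened from watch's 2-second default so it finishes quickly; watch itself also clears the screen and can highlight changes with -d):

```shell
# A minimal stand-in for watch: re-run a command on a fixed interval.
# Here it runs three times and stops; watch loops until you ctrl+c.
for run in 1 2 3; do
  date +%s        # any command you'd pass to watch goes here
  sleep 0.1       # watch's default interval is 2 seconds
done
```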
There are a lot of great commands in here, so here are my favorites that I haven’t seen yet:
- crontab -e
- && and || operators
- The > and >> redirection operators and input/output redirection
- for loops, while/if/then/else
- Basic scripts
- Stdin vs stdout vs /dev/null
Need to push a file out to a couple dozen workstations and then install it?
for i in $(cat /tmp/wks.txt); do echo $i; rsync -azvP /tmp/file $i:/opt/dir/; ssh -qo ConnectTimeout=5 $i "touch /dev/pee/pee"; done
Or script it with if/else statements, pulling info from the remote machines to see whether an update is needed and pushing it only if it's out of date. And if it's in a script file, you don't have to search through days of old history to find that one command.
Or just throw that script into crontab and automate it entirely.
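That one-liner can be made a bit sturdier as a small script. A sketch under hypothetical paths (the DRY_RUN guard is my addition, so it only prints the commands until you flip it to 0):

```shell
# Hypothetical push script: read hosts from a file and rsync a file to each.
# With DRY_RUN=1 (the default) it only echoes the commands instead of running them.
push_file() {
  local hosts_file="$1" src="$2" dest_dir="$3"
  while IFS= read -r host; do
    [ -z "$host" ] && continue                      # skip blank lines
    local cmd="rsync -azvP $src $host:$dest_dir/"
    if [ "${DRY_RUN:-1}" = "1" ]; then
      echo "would run: $cmd"
    else
      $cmd
    fi
  done < "$hosts_file"
}

# Example (dry run): push_file /tmp/wks.txt /tmp/file /opt/dir
```

The while read loop also sidesteps the word-splitting and globbing surprises of `for i in $(cat file)` when a line contains spaces or wildcard characters.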
I’m a big enjoyer of pushd and popd
so if you're in a working dir and need to go work in a different dir, you can pushd ./, cd to the new dir and do your thing, then popd to go back to the old dir without typing in the path again
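A quick self-contained demo of that round trip (the scratch directories under /tmp are just for illustration; note that pushd DIR also does the cd for you, so you can skip the separate cd step):

```shell
# pushd saves the current directory on a stack and cds to its argument;
# popd pops the stack and returns you there.
mkdir -p /tmp/pp_demo/project /tmp/pp_demo/elsewhere
cd /tmp/pp_demo/project
pushd /tmp/pp_demo/elsewhere > /dev/null   # remember project/, jump away
pwd                                        # ends in .../pp_demo/elsewhere
popd > /dev/null                           # back where we started
pwd                                        # ends in .../pp_demo/project
```

pushd/popd are shell builtins in bash and zsh, so this needs one of those rather than plain sh.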
Nice! I didn’t know that one.
You can also cd to a directory and then do
cd -

to go to the last directory you were in.
find /path/to/starting/dir -type f -regextype egrep -regex 'some[[:space:]]*regex[[:space:]]*(goes|here)' -exec mv {} /path/to/new/directory/ \;

I routinely have to find a bunch of files that match a particular pattern and then do something with those files, and as a result, find with -exec is one of my top commands. If you're someone who doesn't know wtf that above command does, here's a breakdown piece by piece:

- find - CLI tool to find files based on lots of different parameters
- /path/to/starting/dir - the directory at which find will start looking for files, recursively moving down the file tree
- -type f - specifies I only want find to find regular files
- -regextype egrep - in this example I'm using regex to pattern match filenames, and this tells find what flavor of regex to use
- -regex 'regex.here' - the regex to match against each result (note that find matches it against the whole path, not just the filename)
- -exec - runs the following command once for every file found, substituting each result into it
- mv {} /path/to/new/directory/ - mv is just an example, you can use almost any command here. The important bit is {}, which is the placeholder for the parameter coming from find, in this case a full file path. Expanded, this reads: mv /full/path/of/file/that/matches/the/regex.file /path/to/new/directory/
- \; - this terminates the command. The semicolon is the actual termination, but it must be escaped so that the current shell doesn't see it and try to use it as a command separator.
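Here's a tiny runnable version of the same pattern, using a plain -name filter instead of the regex (scratch paths under /tmp are just for illustration):

```shell
# Move every .log file under src/ into archive/, leaving other files alone.
mkdir -p /tmp/findexec/src /tmp/findexec/archive
touch /tmp/findexec/src/a.log /tmp/findexec/src/b.log /tmp/findexec/src/notes.txt

find /tmp/findexec/src -type f -name '*.log' \
  -exec mv {} /tmp/findexec/archive/ \;

ls /tmp/findexec/archive    # a.log  b.log
ls /tmp/findexec/src        # notes.txt
```

Related trick: ending with + instead of \; (as in -exec cmd {} +) batches many paths into one invocation, which is much faster for commands like grep or rm.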
fabien@debian2080ti:~$ history | sed 's/ ..... //' | sort | uniq -c | sort -n | tail  # with parameters
     13 cd Prototypes/
     14 adb disconnect; cd ~/Downloads/Shows/ ; adb connect videoprojector ;
     14 cd ..
     21 s        # alias s='ssh shell -t "screen -raAD"'
     36 node .
     36 ./todo
     42 vi index.js
     42 vi todo  # which I use as metadata or starting script in ~/Prototypes
     44 ls
    105 lr       # alias lr="ls -lrth"

fabien@debian2080ti:~$ history | sed 's/ ..... //' | sed 's/ .*//' | sort | uniq -c | sort -n | tail  # without parameters
     35 rm
     36 node
     36 ./todo
     39 git
     39 mv
     70 ls
     71 adb
     96 cd
    110 lr
    118 vi

Search for github repos of dotfiles and read through people's shell profiles, aliases, and functions. You'll learn a lot.
I use $_ a lot; it lets you use the last argument of the previous command in your current command:
mkdir something && cd $_
nano file
chmod +x $_

As a simple example.
If you want to create nested folders, you can do it in one go by adding -p to mkdir
mkdir -p bunch/of/nested/folders
Good explanation here:
https://koenwoortman.com/bash-mkdir-multiple-subdirectories/

Sometimes starting a service takes a while and you're sitting there waiting for the terminal to be available again. Just add --no-block to systemctl and it will do it in the background without keeping the terminal occupied.
systemctl start --no-block myservice
I really hope I remember this one long enough to make it a habit
I have my .bashrc print useful commands with a short explanation. This way I see them regularly when I start a new session. Once I use a command enough that I have it as part of my toolkit I remove it from the print.
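If you want to steal that idea, a minimal sketch for a .bashrc might look like this (the tip list is obviously yours to maintain; delete entries once they're muscle memory):

```shell
# Print one random tip from a hand-maintained list at the start of each session.
tips=(
  'ctrl+r  : reverse-search your shell history'
  'cd -    : jump back to the previous directory'
  'sudo !! : re-run the last command with sudo'
)
echo "tip of the session: ${tips[RANDOM % ${#tips[@]}]}"
```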
ps -ef | grep <process_name>
kill -9 <process_id>
I googled it and learned that -15 is better; I'd forgotten what -9 even did, despite using it for years.
Maybe I can interest you in pgrep? pkill? killall?
The number is the signal you send to the program. There’s a lot of signals you can send (not just 15 and 9).
The difference between them is that 15 (called SIGTERM) tells the program to terminate by itself (so it can store its cached data, save without losing or corrupting data, drop all its open connections gracefully, etc.). 9 (called SIGKILL) will forcefully kill a program, without waiting for it to properly close.
You normally should send signal 15 to a program, to tell it to stop. If the program is frozen and it’s not responding or stopping, you then send signal 9 and forcefully kill it. No signal is “better” than the other, they just have different usecases.
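You can see the "graceful" part in action: a process can trap SIGTERM and run cleanup before exiting, while SIGKILL can never be caught. A small demo (the echo stands in for real cleanup like flushing data or closing connections):

```shell
# Start a background bash that traps SIGTERM and does "cleanup" before exiting.
bash -c 'trap "echo cleaning up before exit; exit 0" TERM; while :; do sleep 0.2; done' &
pid=$!
sleep 0.3        # give it a moment to install the trap

kill -15 "$pid"  # SIGTERM: the trap fires and the cleanup echo runs
wait "$pid"      # exit status is 0 because the trap called exit 0
# kill -9 "$pid" would bypass the trap entirely: no cleanup, no chance to save state.
```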
ctrl+r on bash will let you quickly search and execute previous commands, usually by typing just the first few characters.
it's much more of a game changer than it first appears.
And ctrl+s will search forward in history if you're spamming ctrl+r too fast and miss whatever you're looking for (you may need stty -ixon in your .bashrc, since ctrl+s is often grabbed by terminal flow control).
Not a command but the tab key for auto complete. This made it much easier for me.
ripgrep
parallel - easy multithreading right in the command line. This is what I wish was included in every programming language's standard library: a dead simple parallelization function that takes a collection, an operation to be performed on the members of that collection, and optionally the max number of threads (should be the number of hardware threads available on the system by default), and just does it without needing to manually set up threads and handlers.

inotifywait - for seeing what files are being accessed/modified.

tail -F - for a live feed of a log file.

script - for recording a terminal session complete with control and formatting characters and your inputs. You can then cat the generated file to get the exact output back in your terminal.

screen - starts a terminal session that keeps running after you close the window/SSH and can be re-attached with screen -x.

Finally, a more complex command I often find myself repeatedly hitting the up arrow to get:

find . -type f -name '*' -print0 | parallel --null 'echo {}'

Recursively lists every file in the current directory and uses parallel to perform some operation on them. The {} in the parallel string will be replaced with the path to a given file. The '*' part can be replaced with a more specific filter for the file name, like '*.txt'.

I can recommend tmux also as an alternative to screen
"should be the number of hardware threads available on the system by default"
No, not at all. That is a terrible default. I do a lot of number churning and sometimes I have to test stuff on my own machine. Generally I use a safe number such as 10, or if I need to do something very heavy I'll go to one less than the actual number of cores on the machine. I've been burned too many times by starting a calculation and then having my machine stall because that code is eating all the CPU, and all you can do is switch it off.
when I forget to include sudo in my command:
sudo !!

To add to this one, it also supports more than just the previous command (which is what !! means). You can do

sudo !453

to run command 453 from your history; it also supports relative references like !-5. You can use it without sudo too, which is handy for things like !ls to repeat the last ls command. Okay, one more: you can add :p to the end to print the command without running it, just in case, like !systemctl:p, which can be handy!

I forget where I got it, but mine will do this if I double tap ESC after I sent the command without sudo. Very useful.
I should probably figure out what it was I added to do this.
Doesn’t issue the command. Have to hit enter. Useful to verify it’s the right command first.
With the way bash history can work, I'd be worried about running sudo rm -rf ./* by mistake.
Also if you make a typo you can quickly fix it with ^, e.g.

ls /var/logs/apache
^logs^log

That only replaces the first occurrence; if the string recurs, global replacement is:

!!:gs/logs/log/
Similar-ish for quickly editing last command:
fc
I learned about this through Bread On Penguins, she did a vid on useful commands
with zsh, you can type !! and then press space to have it replaced by the previous command, so you can edit it before running :)
nc is useful. For example: if you have a disk image downloaded on computer A but want to write it to an SD card on computer B, you can run something like

user@B: nc -l 1234 | pv > /dev/$sdcard

And

user@A: nc B.local 1234 < /path/to/image.img

(I may have the syntax messed up; also, don't transfer sensitive information this way!)
Similarly, no need to store a compressed file if you're going to uncompress it as soon as you download it: just pipe wget or curl to tar or xz or whatever.

I once burnt a CD of a Linux ISO by wget-ing directly to cdrecord. It was actually kinda useful because it was on a laptop that was running out of HD space. Luckily the university Internet was fast and the CD was successfully burnt :)
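The same no-intermediate-file trick works entirely locally, which makes it easy to try out. Here tar-to-tar through a pipe stands in for wget-to-tar (the scratch paths under /tmp are just for illustration):

```shell
# Stream a tarball straight from producer to consumer: the compressed
# archive only ever exists inside the pipe, never on disk.
mkdir -p /tmp/pipe_demo/src /tmp/pipe_demo/out
echo 'hello from the pipe' > /tmp/pipe_demo/src/greeting.txt

tar -czf - -C /tmp/pipe_demo src | tar -xzf - -C /tmp/pipe_demo/out

cat /tmp/pipe_demo/out/src/greeting.txt   # hello from the pipe
```

With a real download it's the same shape, e.g. curl -L https://example.com/thing.tar.gz | tar -xzf - (URL hypothetical).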