I’ve been setting up a new Proxmox server and messing around with VMs, and wanted to know what kind of useful commands I’m missing out on. Bonus points for a little explainer.

journalctl | grep -C 10 'foo' was useful for me when I needed to troubleshoot some fstab mount fuckery on boot. It pipes journalctl (the systemd journal, which includes boot logs) into grep to find ‘foo’, and prints 10 lines of context before and after each instance of ‘foo’.

  • ☂️-@lemmy.ml · 2 hours ago

    ctrl+r in bash lets you quickly search and re-run previous commands, usually just by typing the first few characters.

    it’s much more of a game changer than it first appears.
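
    For example, pressing Ctrl+R and typing a few characters brings up the most recent matching command (a rough sketch; the example command is just the one from the OP):

    (reverse-i-search)`jour': journalctl | grep -C 10 'foo'

    Press Enter to run it, Ctrl+R again to cycle through older matches, or an arrow key to edit it first.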

  • InFerNo@lemmy.ml · 2 hours ago

    I use $_ a lot; it lets you reuse the last argument of the previous command in your current command.

    mkdir something && cd $_

    nano file
    chmod +x $_

    As a simple example.

    If you want to create nested folders, you can do it in one go by adding -p to mkdir

    mkdir -p bunch/of/nested/folders

    Good explanation here:
    https://koenwoortman.com/bash-mkdir-multiple-subdirectories/q

    Sometimes starting a service takes a while and you’re sitting there waiting for the terminal to be available again. Just add --no-block to systemctl and it will do it in the background without keeping the terminal occupied.

    systemctl start --no-block myservice

  • roran@sh.itjust.works · 3 hours ago

    cd `pwd`
    

    for when you want to stay in a dir that gets deleted and recreated. Your shell is still attached to the old, deleted directory; cd-ing back into the same path drops you into the newly created one.

    cat /proc/foo/exe > program
    cat /proc/foo/fd/bar > file
    

    to recover the binary of a still-running program, or a deleted file that’s still open in a running program (here foo is the process ID and bar is the file descriptor number).

  • qjkxbmwvz@startrek.website · 5 hours ago

    nc is useful. For example: if you have a disk image downloaded on computer A but want to write it to an SD card on computer B, you can run something like

    user@B: nc -l 1234 | pv > /dev/$sdcard

    And

    user@A: nc B.local 1234 < /path/to/image.img

    (I may have the syntax messed up; also, don’t transfer sensitive information this way!)

    Similarly, no need to store a compressed file if you’re going to uncompress it as soon as you download it—just pipe wget or curl to tar or xz or whatever.
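
    A minimal sketch of that (URL and filenames are just placeholders):

    # unpack a .tar.gz while downloading, without ever storing the archive
    curl -L https://example.com/release.tar.gz | tar -xzf -

    # same idea with wget; -O- sends the download to stdout
    wget -O- https://example.com/image.img.xz | xz -d > image.img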

    I once burnt a CD of a Linux ISO by piping wget directly into cdrecord. It was actually kinda useful because it was on a laptop that was running out of HD space. Luckily the university internet was fast and the CD was successfully burnt :)

  • HiddenLayer555@lemmy.ml · 4 hours ago

    parallel, easy multithreading right in the command line. This is what I wish was included in every programming language’s standard library: a dead simple parallelization function that takes a collection, an operation to be performed on the members of that collection, and optionally the max number of threads (should be the number of hardware threads available on the system by default), and just does it without needing to manually set up threads and handlers.
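
    The basic usage looks something like this (filenames are just placeholders):

    # compress every .log file, running as many jobs as there are CPU cores
    parallel gzip ::: *.log

    # or cap the number of simultaneous jobs explicitly
    parallel -j 4 'gzip {}' ::: *.log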

    inotifywait, for seeing what files are being accessed/modified.
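
    For example, to keep watching a directory tree and print events as they happen (the path is a placeholder):

    # -m keeps monitoring instead of exiting after the first event,
    # -r watches recursively, -e limits the reported event types
    inotifywait -m -r -e access,modify,create,delete /path/to/watch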

    tail -F, for a live feed of a log file.
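
    Unlike plain -f, -F keeps following the file even after it’s rotated or recreated. For example:

    tail -F /var/log/syslog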

    script, for recording a terminal session complete with control and formatting characters and your inputs. You can then cat the generated file to get the exact output back in your terminal.
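
    For example (the filename is whatever you choose):

    script session.log   # start recording; exit the shell (or Ctrl+D) to stop
    cat session.log      # replay the captured output, colours and all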

    screen, for starting a terminal session that keeps running after you close the window/SSH connection and can be re-attached with screen -x.
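
    A rough sketch (the session name is just an example):

    screen -S mysession   # start a named session
    # detach with Ctrl+A then D, or just drop the SSH connection
    screen -ls            # list running sessions
    screen -x mysession   # re-attach, even from another terminal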

    Finally, a more complex command I often find myself repeatedly hitting the up arrow to get:

    find . -type f -name '*' -print0 | parallel --null 'echo {}'

    Recursively lists every file under the current directory and uses parallel to perform some operation on each of them. The {} in the parallel command string is replaced with the path to a given file, and the '*' part can be replaced with a more specific filter for the file name, like '*.txt'.

    • ranzispa@mander.xyz · 4 hours ago

      should be the number of hardware threads available on the system by default

      No, not at all. That is a terrible default. I do a lot of number crunching and sometimes I have to test stuff on my own machine. Generally I tend to use a safe number such as 10, or if I need to do something very heavy I’ll go to one less than the actual number of cores on the machine. I’ve been burned too many times by starting a calculation and having my machine stall because that code is eating all the CPU, leaving me no option but to switch it off.
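
      With GNU parallel that would look something like this (a sketch, leaving one core free; the gzip job is just a placeholder):

      # nproc reports the number of available cores; run one job fewer than that
      parallel -j "$(( $(nproc) - 1 ))" 'gzip {}' ::: *.log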

  • kittenroar@beehaw.org · 5 hours ago

    systemd-run lets you run a command under resource limits, e.g.

    systemd-run --scope -p MemoryLimit=1000M -p CPUQuota=20% ./heavyduty.sh
    
    • kittenroar@beehaw.org · 5 hours ago

      ulimit can also be used to define limits, but for a user rather than a process. This could protect you against, e.g., a fork bomb.
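
      For example (the numbers are arbitrary):

      ulimit -u 2000      # cap the number of processes this user/shell can spawn
      ulimit -v 4000000   # cap virtual memory per process, in kilobytes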

  • corsicanguppy@lemmy.ca · 4 hours ago

    journalctl | grep -C 10 'foo' was useful for me when I needed to troubleshoot some fstab mount fuckery on boot.

    Ha! Remember back when there was no fstab fuckery? Good times. But you have a massive init blob slowly eating other services and replacing them with shitty replicants like this embarrassment (ohai root NFS) and all of us Unix people are chuckling in our reduced-fuckery ‘hell’.

  • Geodes_n_Gems@lemmy.ml · 4 hours ago

    Running Wine is probs the command I’ve used the most; you can tell I haven’t touched the thing in months.

  • demonsword@lemmy.world · 6 hours ago

    Something that really improved my life was learning to properly use find, grep, xargs and sed. Besides that, there are these two little ‘hacks’ that are really handy at times…

    1- find out which process is using some local port (ss being the modern netstat replacement):

    $ ss -ltnp 'sport = :<port-number>'

    2- find out which process is consuming your bandwidth:

    $ sudo nethogs

    • Ephera@lemmy.ml · 5 hours ago

      I always just do ss -ltnp | grep <port-number>, which filters well enough for my purposes and is a bit easier to remember…

  • Ftumch@lemmy.dbzer0.com · 10 hours ago

    Ctrl-z to suspend the running program.

    bg to make it continue running in the background.

    jobs to get an overview of background programs.

    fg to bring a program to the foreground.
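
    A quick worked example of how those fit together (sleep is just a stand-in for any long-running command):

    sleep 600        # something long-running in the foreground
    # press Ctrl+Z to suspend it; bash reports it as a stopped job
    bg               # let it keep running in the background
    jobs             # list background jobs and their numbers
    fg %1            # bring job 1 back to the foreground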