• 1 Post
  • 230 Comments
Joined 3 years ago
Cake day: June 16th, 2023

  • BananaTrifleViolin@lemmy.world to Comic Strips@lemmy.world · Doctors · edited 11 days ago

    Not really - if a woman came in with a gunshot wound, she’d be asked if she was pregnant. Why? Because she’d need a CT scan or an X-ray, which use ionizing radiation and carry a risk for a foetus. She’d need a scan or X-ray to ensure there was no shrapnel in the wound before closure even if it were superficial, and to assess for damage to vessels, bone etc. if the wound were deep.

    It’s a standard question that any woman would recognise from trips to the emergency room. It’s pretty ineffective as a punchline if the cartoonist is trying to make the point you say they’re making.

    Instead it just makes the woman in the cartoon appear dumb or ignorant, which totally undermines the message it’s purportedly trying to put across. She is giving a fed-up or even patronising look over something that would be an essential question in any hospital.


  • Out of interest, which aspect don’t you believe? The article is clear that the broken update affects a specific subset of enterprise users, on a specific mix of base versions and cumulative updates.

    This seems like a classic Windows update issue. In fairness to Microsoft, it is difficult to prevent bugs when there is a huge install base, with a huge range of hardware and a huge range of users on different mixes of updates, updating at their own pace. I personally think that’s totally believable.

    What’s not clear is perhaps the implied overarching story that W11 is worse for this than other versions of Windows. I can’t answer that about Windows updates themselves, but I certainly believe W11 is the worst version of Windows I’ve ever used (and I’ve used every version back to 3.11 as a kid). I have to use W11 at work: the UI is absolutely terrible and unfriendly but, far worse, it constantly and inexplicably slows down, programs repeatedly become unresponsive and I constantly run into errors.

    I work in a big organisation and I don’t even bother to report most errors now - we hop between PCs because of the nature of my job, and I’ve come across so many issues I just can’t be bothered opening more tickets. I’d describe it as mostly a large volume of minor issues and inconveniences that, cumulatively and on top of the bad design, make it a shit experience. But I’ve also had numerous major errors since we moved from W10 to W11 on different PCs - they all have the same hardware and software, yet the problems are different on each. I’ve given up reporting the problems and just avoid those PCs, and I think a lot of my colleagues are the same.

    My organisation (I work in a large hospital) is already stretched due to high work volume and low staffing, and we now have a constant little drag from Windows 11 on everything we do. It’s like Microsoft sprinkle a little bit of shit onto every computer, every day, all day. The cumulative effect in just my organisation must be massive - I shudder to think how bad it is across the whole economy.






  • Yes, it’s fairly simple to do. Essentially the user needs to download an image of a Linux install disc, flash it onto a USB stick (or a DVD I guess), and then reboot their PC. They may need to press a key at boot to open the boot menu and select the USB (or enter the BIOS to change the boot order). There’s a rough sketch of the flash step at the end of this comment.

    After that, most distros offer a very easy to follow installer which will install the new OS.

    Most Linux installs can be done alongside Windows (on the same hard drive or its own drive), but Windows tends to break the boot loader with updates. It’s generally better to only dual boot if you’re good at fixing things - otherwise a full Linux install is better.

    The most important thing is to back up all your important data, and only do this if you genuinely want to leave Windows. I’d make sure your Windows license is digital before doing this too, as that allows using Windows again if you want to go back.

    I’d say anyone can use Linux - it’s user friendly and robust. In terms of installing Linux, I’d only do it if you are sure you know what you’re doing - installing any OS, including Windows, can involve troubleshooting problems.
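    As a rough illustration of the flash step mentioned above, here’s a minimal Python sketch - the ISO name, checksum and device path are placeholders you’d have to change, and most people would just use a graphical tool like Fedora Media Writer or balenaEtcher instead:

```python
# Sketch: verify a downloaded ISO's checksum and write it raw to a USB stick.
# WARNING: writing to the wrong device destroys its contents - double-check the path.
# The ISO filename, checksum and "/dev/sdX" below are placeholders, not real values.
import hashlib
import shutil
import sys

iso_path = "linux-install.iso"           # the image you downloaded
expected_sha256 = "paste-checksum-here"  # from the distro's download page
usb_device = "/dev/sdX"                  # the whole USB stick, not a partition like /dev/sdX1

# 1. Verify the download wasn't corrupted.
sha256 = hashlib.sha256()
with open(iso_path, "rb") as f:
    for chunk in iter(lambda: f.read(1024 * 1024), b""):
        sha256.update(chunk)
if sha256.hexdigest() != expected_sha256.lower():
    sys.exit("Checksum mismatch - re-download the ISO before flashing.")

# 2. Write the image byte-for-byte to the stick (needs root).
with open(iso_path, "rb") as src, open(usb_device, "wb") as dst:
    shutil.copyfileobj(src, dst, length=4 * 1024 * 1024)

print("Done - reboot and pick the USB stick from the boot menu.")
```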


  • I’ve tried Arch - it allows you to make a system that is exactly what you want, so there’s no bloat from installing stuff you never need or use. It also gives you absolute control.

    On other distros like Fedora, you get a pre-configured system set up for a wide range of users. You can trim down the packages somewhat, but you will often have core stuff installed that is more than you’ll need, as it caters to everyone.

    Arch allows you to build it yourself, only install exactly the things you actually want, and configure them exactly how you want.

    Also you learn an awful lot about Linux building your system in this way.

    I liked building an Arch system in a virtual machine, but I don’t think I could commit to maintaining an Arch install on my host. I’m happy to trade bloat for a “standard” experience that means I can get generic support. The more unique your system, the more unique your problems can be, I think. But I can see the appeal of Arch - “I made this” is a powerful feeling.


  • I think the new device is good news. I can see what you’re saying - the benefit is if Steam Machines expand the PC games market with former console-only players. But otherwise the threshold for PC development is already much lower than consoles: there are no dev kit fees, a wide choice of engines to target, relatively greater independence, etc.

    The Steam Machine may help somewhat in having a specific hardware profile to target, but the games are still on Steam’s store, so they still have to be able to run widely on Windows or Linux. That’s always been the complexity of PC development - the Steam Machine doesn’t change that much. Although, admittedly, the Steam Verified benchmarks are useful in helping users understand what their kit can actually run, which will benefit indie devs.


  • For me it seems to be when you go through to download the Windows binary: you get an iframe on the page containing another site, which has ads and serves up the download. So I’m guessing the ads are on the website that provides VideoLAN with hosting for its binaries?

    They are old-fashioned intrusive ads pretending you need to click them to start your download, even though the download has already started.



    • OS --> openSUSE Linux with KDE

    • YouTube --> FreeTube - open-source, private YouTube client for Linux, macOS and Windows

    • Downloading music/videos --> yt-dlp (a rough library-usage sketch follows this list)

    • Downloading videos/images --> gallery-dl

    • Email --> Thunderbird (has really moved forward in the last few years)

    • Notes --> Joplin

    Self-hosting (mine is on a Raspberry Pi):

    • Streaming library - Jellyfin

    • Photo library - Immich

    • Downloads - qBittorrent, Prowlarr, Radarr, Sonarr, LazyLibrarian in a Docker stack with a VPN

    • Smart home - Home Assistant

    • File sync --> Syncthing (I don’t have problems with long file names - maybe a Windows issue or the Linux FS? I use ext4 on all my devices and don’t use Windows anymore)
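    yt-dlp (mentioned above) also works as a Python library, not just a command-line tool. A minimal sketch - the URL and options here are just illustrative:

```python
# Minimal yt-dlp library usage (pip install yt-dlp); URL and output template are examples.
from yt_dlp import YoutubeDL

options = {
    "format": "bestaudio/best",      # prefer the best audio-only stream
    "outtmpl": "%(title)s.%(ext)s",  # name the file after the video title
}

with YoutubeDL(options) as ydl:
    ydl.download(["https://www.youtube.com/watch?v=EXAMPLE"])
```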


  • BananaTrifleViolin@lemmy.world to Linux@lemmy.ml · Timeshift · edited 1 month ago

    Looking at your error, it’s because rsync is failing.

    I’d start by testing rsync with an individual text file saved to /dev/dm-0 and see what error is returned.

    Timeshift is good, but it’s basically just a tool that uses rsync to save a copy of your system folders (or other folders if you wish).

    rsync needs to be able to read the source and write to the destination, so I’d start by testing that rsync is able to do that (a rough sketch below).
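    As a rough sketch of that test, assuming the decrypted destination is mounted somewhere like /mnt/backup (a placeholder path, not from your setup):

```python
# Sketch: copy a single test file with rsync and surface the real error message.
# "/mnt/backup/rsync-test/" is a placeholder destination - use your mounted backup path.
import subprocess
import tempfile

destination = "/mnt/backup/rsync-test/"

# Create a throwaway source file to copy.
with tempfile.NamedTemporaryFile(mode="w", suffix=".txt", delete=False) as f:
    f.write("rsync test\n")
    test_file = f.name

result = subprocess.run(
    ["rsync", "-av", test_file, destination],
    capture_output=True,
    text=True,
)
print("exit code:", result.returncode)
print(result.stdout)
print(result.stderr)  # rsync's error text usually points at the permission or path problem
```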

    Given you’re using an encrypted partition, it’s possible you’re trying to read/write to the wrong locations. You’ve provided device UUIDs, but you’d probably actually need to be backing up the mounted, decrypted locations? I.e. the root filesystem / will actually be a mounted location in your Linux setup, probably under /run, with symlinks pointing to it for all the different system folders. Similar for /home/ if you want to back up personal files.

    The device UUID would point to the filesystem containing the encrypted file (managed by LUKS), which will have very limited read/write permissions, rather than directly to the decrypted contents of the / or /home partitions as you’d expect in a normal system. In particular, if /dev/dm-0 (looks to be an NVMe drive) is an encrypted destination, then you really want to be pointing directly at its decrypted mounted location to write your files into, not the whole device.

    Edit: think of it like this - you don’t want to back up the encrypted container with Timeshift, you want to back up the decrypted contents (your filesystem) into another location in your filesystem (encrypted or decrypted). If the destination is also an encrypted location, you need to back up into its filesystem, not the device where the encrypted file sits. So use more specific filesystem paths, not UUIDs. That would be something like /mnt/folder or /run/folder, not /dev/anything, as that’s a hardware location and is not directly mounted in an encrypted setup, unlike how it can be in a non-encrypted system.


  • Any point-and-click adventure game - there are loads, including old classics and good modern games.

    The Monkey Island remasters are fun and can be played with a mouse. The Broken Sword games are also good.

    The Rusty Lake games are great if you prefer puzzle games to narrative ones. They still have a great, somewhat surreal plot, just not like a point-and-click narrative game.

    Also, if you haven’t played Dwarf Fortress, now is the time to learn - the siege update came out this week. Mouse or keyboard, or both, but it can definitely be played one-handed.

    Vampire Survivors, which others have suggested, is a good shout - one hand on the keyboard is enough and it’s very addictive.


  • 100% CPU use doesn’t make sense - RAM would be the main constraint, not the CPU. Worth looking into; it may be a bug or a broken piece of software.

    Also, the DE may be more of an issue than the distro itself. You could install an even more lightweight desktop environment like Openbox. It’s also worth checking whether you’re using X11 or Wayland - it’s easy to imagine Wayland hasn’t been optimised or extensively tested on something like your device, and it could easily be a random bug if the DE is pushing your CPU to 100%.
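    A quick way to check both things at once - the session type comes from the standard XDG_SESSION_TYPE variable and the process list from GNU ps, so this is a Linux-only sketch:

```python
# Sketch: report whether the session is X11 or Wayland and list the top CPU users.
import os
import subprocess

print("Session type:", os.environ.get("XDG_SESSION_TYPE", "unknown"))

# Top 10 processes by CPU use (GNU procps syntax).
ps = subprocess.run(
    ["ps", "-eo", "pid,comm,%cpu", "--sort=-%cpu"],
    capture_output=True,
    text=True,
)
print("\n".join(ps.stdout.splitlines()[:11]))  # header row + 10 processes
```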

    There are also super-lightweight distros like Puppy Linux.


  • In terms of KDE dependencies, you’re basically talking about Qt. The number of packages you download shouldn’t be too large, and they are likely used by other Qt programs, which are common.

    However, there is also GSConnect, which is a GNOME extension that uses the KDE Connect protocol.

    I would say that your concerns regarding the KDE Connect dependencies should be balanced against the good Android and iOS support, and the wide use of KDE Connect means it is well maintained, supported and responsive to security updates. These considerations may outweigh the installation of packages that you otherwise won’t be using? It may be better to go mainstream and accept the dependencies than hunt down a less well supported alternative and deal with the associated shortcomings.


  • Interesting question. I’d imagine that one major limit would be the number of cores your CPU has available - once you get to more VMs than cores, I’d guess things would quickly grind to a halt?

    But I wonder if you could even get anywhere near that point, as from searching, only L2 VMs are mentioned on various sites, and even then with warnings of severe performance limitations and for development testing only. While L3 might work, the problems may get so bad that you can’t practically go beyond that level?
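    On a Linux/KVM host you can at least check the two constraints directly - a small sketch (kvm_intel applies to Intel CPUs, kvm_amd to AMD; other hypervisors expose this differently):

```python
# Sketch: show the logical core count and whether KVM nested virtualisation is enabled.
import os
from pathlib import Path

print("Logical CPUs available:", os.cpu_count())

# KVM exposes the nested setting under /sys: kvm_intel on Intel, kvm_amd on AMD.
for module in ("kvm_intel", "kvm_amd"):
    nested = Path(f"/sys/module/{module}/parameters/nested")
    if nested.exists():
        print(f"{module} nested virtualisation:", nested.read_text().strip())
```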


  • The key is getting out at the right time, and that is weighted massively against small investors. The big investors and institutions control the market and can move quickly, while small investors cannot.

    Tesla is not doing well - look at its falling sales. It’s a risky stock to hold. The AI companies are also highly risky stocks to hold.

    That doesn’t mean don’t hold them - all anyone is really saying is that these are high-risk investments, and at some point they are probably going to crash because it’s a bubble.

    That doesn’t necessarily mean “don’t invest”. It does certainly mean be prepared to get out fast, and only use money you can afford to lose when investing in such high-risk stocks.


  • It’s about short-term vs long-term costs, and AWS has priced itself to be cheaper in the short term but a bit more expensive in the long term.

    Companies are more focused on the short term - even if something like AWS is more expensive long term, if it saves money in the short term that money can be used for something else.

    Also, many companies don’t have the money up front to build out their own infrastructure quickly, but can afford longer-term gradual costs. The hope would be that, even though it’s more expensive, they reach a scale where they make bigger profits faster, and the extra expense paid to AWS was worth it.
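    To make the trade-off concrete with some entirely made-up illustrative numbers (none of these figures come from AWS or the article):

```python
# Illustrative only: when does owning hardware overtake renting it? All figures invented.
own_upfront = 500_000    # one-off build-out cost for your own infrastructure
own_monthly = 10_000     # ongoing running costs once built
cloud_monthly = 25_000   # pay-as-you-go cloud bill for the same workload

# Break-even month: where cumulative cloud spend overtakes build-and-run costs.
breakeven_months = own_upfront / (cloud_monthly - own_monthly)
print(f"Cloud is cheaper for roughly the first {breakeven_months:.0f} months,")
print("after which owning your own kit wins - if you had the cash up front.")
```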

    This is how a lot of outsourcing works. And it’s exacerbated by many companies being very short-term and stock-price focused. Companies could invest in their own infrastructure for long-term gain, but they often favour short-term profit boosts and cost reductions to boost their share price or pay out to shareholders.

    Companies frequently do things not in their long-term interests for this reason. For example, companies that own their own land and buildings sell them off and rent them back. Short term it gives them a financial boost; long term it’s a permanent cost and a loss of assets.

    In Signal’s case it’s less of a choice; it’s funded by donations and just doesn’t have the money to build out its own data centre network. Donations will support ongoing, gradually scaling costs, but it’s unlikely they’d ever get a huge tranche of cash to be able to build data centres worldwide. They should still be using multiple providers, and they should also look to build up some infrastructure of their own for resilience and lower long-term costs.


  • It does make sense for Signal as this is a free app that does not make money from advertising. It makes money from donations.

    So every single message, every single user, is a cost without any ongoing revenue to pay for it. You’re right about the long run but you’d need the cash up front to build out that infrastructure in the short term.

    AWS is cheap in the sense that instead of an initial outlay for hardware, you largely only pay for actual use and can scale up and down easily as a result. The cost per user is probably going to be higher than if you were to completely self host long term, but that does then mean finding many millions to build and maintain data centres all around the world. Not attractive for an organisation living hand to mouth.

    However what does not make sense is being so reliant on AWS. Using other providers to add more resilience to the network would make sense.

    Unfortunately this comes back to the real issue - AWS is an example of a big tech company trying to dominate a market with cheap services now for the potential benefit of a long-term monopoly and raised prices in the future. They have 30% market share, and already an outage at Amazon is highly disruptive. Even at 30%, we’re at the point of end users feeling locked in.