I’ve seen people talking about it and experienced it myself with a server, but why does Linux run so well on ARM (especially compared to Windows)?

  • h3ndrik@feddit.de

    I don’t think it’s just Linux. I’ve been told MacOS also works very well on ARM. Maybe it’s just Microsoft doing a bad job.

      • Bluefruit@lemmy.world

        I don’t believe it.

        You mean to tell me that Microsoft is doing a bad job with their OS???

        Preposterous. These 100 or so processes that it's running to track my every breath are incredibly important to make sure I'm given the best ad experience.

    • Semi-Hemi-Demigod@kbin.social

      Mac OS was running on RISC processors back in the 90s, and Steve Jobs used them in his NeXT computers, which ran a variation of BSD. That BSD lineage became the basis for OS X, which could run on PowerPC.

      Apple’s had a ton of experience with RISC so it makes sense they’d do it well.

      • .:\dGh/:.@lemmy.ml

        It’s mainly due to the PA Semi acquisition. These guys were the ones responsible for making excellent PowerPC processors, which were similar to what ARM has now.

        These guys are probably happier now that they have more resources, target devices and tightly coupled software.

      • LeFantome@programming.dev

        NeXT computers were based on Motorola 680x0 processors that were actually CISC (not RISC). Steve Jobs did run MacOS on RISC in the 90s though, as that is what PowerPC was.

        Modern Apple silicon is of course ARM64 so not the same architecture as PowerPC at all.

        • Bene7rddso@feddit.de

          NeXTStep ran on multiple architectures, some of them RISC. They did some work on a PPC build too.

    • Xusontha@ls.buckodr.ink (OP)

      Well, MacOS works well because of a controlled ecosystem/hardware and a really good emulator, but IDK about Linux

      Also yes Windows on ARM is a steaming pile of garbage

      • h3ndrik@feddit.de

        Yeah. Linux is also optimized to run well. It has a capable community and a few good design choices. Many people run it on servers, so I wouldn’t be surprised if it performed well there.

        Also, there is a well-known fork that is used on millions/billions(?) of ARM phones, so it’d better be a good choice for that use case.

      • pivot_root@lemmy.world

        Microsoft absolutely could have made something comparable to Rosetta 2 for userspace if they were competent.

        Rosetta 2 isn’t an emulator, but a binary recompiler. It takes amd64 instructions, decodes them, and generates equivalent aarch64 instructions. The aarch64 instructions are then executed directly by the processor, performing the same tasks that the original binary would do on an Intel processor.

        It’s extremely difficult to do properly, but it’s nothing inherently special to MacOS or Apple’s ARM chips. ARMv8 has an attribute to enable strongly-ordered memory accesses, and it also supports native 4 KiB page sizes. Beyond those two solved concerns, there isn’t any actual hardware barrier preventing binary translation. Individual amd64 instructions can be translated into one or more equivalent aarch64 instructions, and complex instructions or those using large registers like those in AVX-512 can be shimmed and implemented in software. An offset table can be used to deal with indirect jumps, and direct jumps can just be rewritten in the generated code. And as Apple has proven, it’s even possible to support JIT-compiled code by intercepting jumps into executable pages and recompiling them before executing.
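
        To make the mechanics a bit more concrete, here’s a deliberately tiny, hypothetical sketch of the idea (not Apple’s implementation, and working on mnemonic strings instead of real machine code): a table-driven translator that maps a few pretend amd64 instructions to aarch64 equivalents and keeps an offset table so indirect jumps into the original code can be redirected to the translated code.

        ```python
        # Toy sketch of binary translation (illustrative only; a real translator
        # decodes machine code, not mnemonic strings, and handles far more cases).

        # Hypothetical 1:N mapping from amd64-style instructions to aarch64-style
        # ones, with rax/rbx arbitrarily mapped to x0/x1.
        TRANSLATIONS = {
            "mov rax, 1":   ["mov x0, #1"],
            "add rax, rbx": ["add x0, x0, x1"],
            "push rbp":     ["str x29, [sp, #-16]!"],  # one guest op may need several host ops
            "ret":          ["ret"],
        }

        def translate(block):
            """Translate a list of guest instructions, keeping an offset table so
            indirect jumps into the original code can be redirected."""
            out, offset_table = [], {}
            for guest_index, insn in enumerate(block):
                offset_table[guest_index] = len(out)  # guest location -> translated location
                # Anything we can't translate directly falls back to a software shim.
                out.extend(TRANSLATIONS.get(insn, [f"bl shim_{insn.split()[0]}"]))
            return out, offset_table

        host_code, offsets = translate(["mov rax, 1", "add rax, rbx", "ret"])
        print(host_code)  # ['mov x0, #1', 'add x0, x0, x1', 'ret']
        print(offsets)    # {0: 0, 1: 1, 2: 2}
        ```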

        It’s expensive in terms of time and engineering skills, but Microsoft had more than enough control over their own proprietary kernel to build something similar into Windows back when they first released it for ARM.

        • orangeboats@lemmy.world

          Rosetta certainly does emulate* x86. It can dynamically recompile x86 instructions to ARM instructions, otherwise applications that include an x86 JIT wouldn’t work at all on ARM Macs.

          * I know people will be pedantic about this… but other emulators (Dolphin, PCSX2 etc) have included a recompiler for ages and no one seemed to have a problem calling them emulators.

          • pivot_root@lemmy.world

            As far as people hypothetically being pedantic about it being an emulator, it does meet the dictionary definition of “hardware or software that permits programs written for one computer to be run on another computer.” Personally, I don’t see it as one, though. It’s more like WINE with a binary translator slapped onto it.

            Dynamic recompilation is a part of modern emulators, but it’s only a tiny piece. Software like Dolphin or Yuzu doesn’t just provide a way to run non-native instruction sets; it provides a full environment mimicking the guest hardware: things like low-level emulation of hardware components, high-level emulation (shims) of guest operating system APIs, and a virtual memory space for the programs to operate in.

            The only significant thing Rosetta does is recompile the instructions of the guest program. All the APIs and abstractions the amd64-compiled program uses are available natively. If I recall correctly there are shims for bridging between the calling convention of the host and the recompiled-amd64 functions, but they don’t do much more work than that.

            Another one of my reasons for not considering it to be an emulator is that it primarily goes for ahead-of-time cached recompilation. It definitely does JIT translation as you mentioned, but as a way to support amd64 JIT-generated code. In contrast, Dolphin and other emulators* rely on cached JIT recompiling or interpreting for everything related to executing the guest instructions.

            * Notable exceptions are Cxbx (Xbox -> Windows) and vita2hos (PS Vita -> Switch), which are emulators for platforms with compatible instruction sets. They work like WINE instead of JIT-recompiling or interpreting code, which is pretty cool.

      • Free Palestine 🇵🇸@sh.itjust.works

        Also yes Windows on ARM is a steaming pile of garbage

        Not just on ARM. Windows is and will always be a proprietary steaming pile of shit, no matter what architecture. That will be the case as long as Microsoft develops it.

    • thelastknowngod@lemm.ee

      I have an M1 MBP for work and it’s honestly unbelievable. It’s one of the nicest machines I’ve owned in years. The chip is a huge part of it.

  • Chobbes@lemmy.world

    I mean… On Linux you’re going to be running a bunch of open source applications that have been compiled for ARM specifically. A huge problem with Windows on ARM is going to be running legacy x86 / x86_64 applications. You’re probably not contending with this problem at all on Linux, and I suspect if you were you would be similarly unimpressed (you can get Linux to transparently execute executables for different platforms using binfmt_misc and qemu but it’s slooooooow).
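
    For anyone curious what that binfmt_misc + qemu setup looks like, here’s a rough sketch (assumptions: the interpreter lives at /usr/bin/qemu-aarch64-static, and the magic/mask bytes match the usual ELF/AArch64 header pattern shipped by qemu-user-static packages; verify both against your distro before trusting this):

    ```python
    # Rough sketch: ask the kernel (via binfmt_misc) to hand 64-bit little-endian
    # ARM ELF binaries to a qemu user-mode interpreter. Needs root, and assumes
    # binfmt_misc is mounted in the usual place.
    # Magic/mask match the ELF header fields (64-bit class, little-endian,
    # e_machine 0xB7 = AArch64); the \x escapes are parsed by the kernel itself.
    MAGIC = r"\x7fELF\x02\x01\x01\x00" + r"\x00" * 8 + r"\x02\x00\xb7\x00"
    MASK = r"\xff" * 7 + r"\x00" + r"\xff" * 8 + r"\xfe" + r"\xff" * 3
    rule = f":qemu-aarch64:M::{MAGIC}:{MASK}:/usr/bin/qemu-aarch64-static:F\n"

    with open("/proc/sys/fs/binfmt_misc/register", "w") as reg:
        reg.write(rule)
    ```

    Once registered, running an aarch64 binary on an x86_64 host “just works”, but every guest instruction goes through qemu’s translator, which is where the slowness comes from.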

    Honestly the better question might be why the Mac transition to Apple silicon has been so smooth. Part of this is that Apple cares a lot less about keeping legacy software working and companies will make native versions of their software ASAP. But Apple also has a good translation layer with Rosetta for this, and has custom silicon (which Microsoft does not) and I would not be surprised if part of this custom silicon involves extended instructions which make running x86 applications more feasible, but I don’t know the details and this is just speculation on my part.

      • kimpilled@infosec.pub

        Apple hit a sweet spot with this. x86_64 applications run at acceptable speed (making the transition easy for people who buy the hardware) while not being SO good that there’s zero reason for developers to start porting their software.

      • pivot_root@lemmy.world

        Small correction: the flag setting modes aren’t undocumented. They’re standardized extensions. ARMv8.4 added FEAT_FlagM, and ARMv8.5 added FEAT_FlagM2.

        https://developer.arm.com/downloads/-/exploration-tools/feature-names-for-a-profile

        IIRC, the only nonstandard ARM extension used by Rosetta 2 in Apple’s processors is TSO, and that’s also implemented by other manufacturers. It’s also not a hard requirement to run amd64 under ARM. You can emulate it very slowly or restrict the application to a single core.
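
        If you want to check whether a given ARM machine advertises those standardized extensions under Linux, they show up as hwcaps. A quick sketch (assuming the “flagm”/“flagm2” feature names documented in the kernel’s arm64 elf_hwcaps list; check that against your kernel version):

        ```python
        # Check for FEAT_FlagM / FEAT_FlagM2 on an arm64 Linux machine by reading the
        # "Features" lines in /proc/cpuinfo (hwcap names assumed to be flagm/flagm2).
        def cpu_features():
            feats = set()
            with open("/proc/cpuinfo") as f:
                for line in f:
                    if line.startswith("Features"):
                        feats.update(line.split(":", 1)[1].split())
            return feats

        feats = cpu_features()
        print("FEAT_FlagM: ", "flagm" in feats)
        print("FEAT_FlagM2:", "flagm2" in feats)
        ```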

        Apologies for the tangent, but I needed to make sure nobody could defend Microsoft’s prior failings by saying “but Apple has secret hardware sauce”.

    • Hamartiogonic@sopuli.xyz

      In 2011 Microsoft released Windows RT, and it was specifically designed to run on ARM hardware. Everyone hated it, and it never really became anything. Well, you can’t blame MS for not trying. Maybe the time just wasn’t right for that sort of radical transition. Everyone was complaining that you couldn’t even install anything other than the handful of applications available in the store.

    • Veraticus@lib.lgbt

      This is overall very true but the transition even for Apple was anything but smooth. There was a long period of time during which app support for ARM was pretty hit or miss. Happily that period is just about over and now everything is built for all archs.

      • Chobbes@lemmy.world

        I dunno, overall Rosetta 2 seems to be incredibly successful. It seems like most people were able to transition without worrying too much about whether their software would work at all or not, which I think is undoubtedly the smoothest an architecture transition like this has ever been.

      • HellAwaits@lemm.ee

        I didn’t have many issues running x86 apps on M1, even apps that hadn’t been updated since before then.

  • rikonium@discuss.tchncs.de

    Windows’s Achilles heel is arguably its chief benefit: legacy compatibility and being the de facto platform for applications.

    Back when I had a Surface RT, I thought it was awfully neat: the ARM-compiled versions of Office, IE, and the Windows 8.x bits ran well, and it was fanless with fine battery life. (Although I surely sound weird, I had a Windows Phone back then too, and the syncing with IE on both was a nice feature.) It’s just that they were pushing the Store then, and even if you jailbroke it, ARM applications were rare.

    Apple is a pro at architecture transitions and can steer the whole ship. MS can put Windows on ARM all they want, but OEMs will be reluctant, since it’ll be a relatively big risk to sell a “Windows, buuut…” computer, and the popular closed-source applications probably won’t bother with ARM for a while.

    • OMGFloriduh@lemmy.world

      Apple is a “pro” because it is a forced migration: the eventual upgrade path is forced, so the vendors have to follow if they want to support the Mac. This is the reason there is vendor adoption on Mac and not on Windows. I think until ARM has a significant market share on Windows, the vendors will not port their software.

      • pivot_root@lemmy.world

        I know the cool thing is to hate on Apple for being Apple, but they have actually done a solid job with their transitions between instruction set architectures.

        Ultimately, software developers and end users are forced to use the newer architecture, yes. But credit where credit is due, Apple chose to take the path of providing both hardware and software level facilities to make the transitions as seamless as possible over years-long timespans. They could have simply refused to support older architectures to force a migration down our throats, and indoctrinated fanboys would have opened their wallets anyway.

        What they actually did was create compatibility layers, a multi-architecture executable file format, and binary translation frameworks built into the operating system. I fucking despise them for creating a walled-garden ecosystem and cult around their products, but I genuinely have to respect them for Rosetta and Rosetta 2. Developing a binary translation layer is a whole different beast than high-level emulation, and it’s significantly more difficult to pull off correctly. The fact that they managed to do it twice, and do it damn-near seamlessly is impressive. Rosetta 2 even supports translating just-in-time compiled code, which is a huge pain point for that kind of thing.
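
        As a concrete illustration of the multi-architecture executable format (Mach-O “universal”/fat binaries), here’s a small sketch that reads the fat header and lists which CPU slices a binary contains. The layout and constants follow <mach-o/fat.h> as I recall it (big-endian magic 0xCAFEBABE, then per-slice cputype/cpusubtype/offset/size/align records), so treat them as assumptions to verify:

        ```python
        import struct

        # Sketch: list the architectures inside a macOS universal (fat) Mach-O binary.
        FAT_MAGIC = 0xCAFEBABE
        CPU_NAMES = {0x01000007: "x86_64", 0x0100000C: "arm64"}  # CPU_TYPE_X86_64, CPU_TYPE_ARM64

        def fat_architectures(path):
            with open(path, "rb") as f:
                magic, nfat = struct.unpack(">II", f.read(8))
                if magic != FAT_MAGIC:
                    return []  # thin (single-arch) binary, or the 64-bit fat variant
                archs = []
                for _ in range(nfat):
                    cputype, _sub, _off, _size, _align = struct.unpack(">5I", f.read(20))
                    archs.append(CPU_NAMES.get(cputype, hex(cputype)))
                return archs

        print(fat_architectures("/bin/ls"))  # e.g. ['x86_64', 'arm64'] on recent macOS
        ```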

      • tunetardis@lemmy.ca

        If you follow the history of the Mac, it went through a number of major architecture transitions from 680x0 -> PowerPC -> Intel -> ARM. Each time, Apple supplied a decent emulator to support applications during the transition.

        From a developer perspective, these were huge upheavals that came with a lot of drama but also offered some opportunities. The latter came from the fact that the bar was in some way set higher on the new platform and you could count on any code you compiled for it supporting certain base features. Every PowerPC, for example, had hardware floating-point. Before that, some CPUs did, some didn’t. The Intel transition happened at the time when dual core had become standard and SIMD had become serviceable (with SSE2). The ARM transition has set the bar at 64-bit architecture for every CPU (since Apple had earlier dumped 32-bit on the iPhone side).

        Windows/Intel has developed in a more evolutionary than revolutionary manner, which is easy to see if you look, for example, at all the legacy cruft in the Intel ISA. It’s a sad sight. Supporting all that makes instruction decoding a nightmare. In theory, Intel/AMD could introduce a new, sleeker ISA if they could get Microsoft to commit to supplying a performant emulator for the old one? But I’m not holding my breath.

    • LeFantome@programming.dev

      Apple can be much more heavy-handed than Microsoft can. Apple controls their hardware ecosystem. They make everybody buy the architecture they want. They make the OS stop working on older hardware. They set minimum OS requirements for application writers. So, all the software (from everybody, not just Apple) gets moved to the new architecture quickly. It does not take long before being on the new architecture is all that makes sense.

      Microsoft, on the other hand, does not control the hardware. They are trying. They make their own now, so they can at least seed the new ecosystem. However, most Windows users buy their Windows hardware from somebody other than Microsoft. It makes sense for most hardware to target the larger application audience, and that will be the old architecture. It makes sense for application devs to target the older architecture. For a long, long time, the older arch makes the most sense for almost everybody in the ecosystem. Only early adopters make the switch (both users and application sellers). In practice, that means that moving to the new arch means not having native apps in many cases, which means the new arch will, in practice, be worse than the old one.

  • nyan@lemmy.cafe

    Linux, and much of the open-source software that goes with it, has been multi-architecture for a long time. If you take something that already runs pretty decently on x86, x86_64, PA-RISC, Motorola 68000, PowerPC, MIPS, SPARC, and Intel Itanium CPUs, porting it to yet another architecture is, while not trivial, at least mostly a known problem.

    Windows, by contrast, was built for descendants of the Intel 8088, period. It’s unsurprising that porting it is a hard problem and that results aren’t always satisfactory.

    (Apple built on top of a modified BSD kernel, and BSD has also been ported around quite a bit, so they also have a ports-are-a-known-problem advantage.)

    • unfnknblvbl@beehaw.org

      Windows, by contrast, was built for descendants of the Intel 8088, period.

      This is not quite true. Windows NT was built to support multiple architectures from the start.

      • apt_install_coffee@lemmy.ml

        NT is not the majority of Windows code, though; for Windows to be multi-architecture, all of Windows needs to work with the new architecture: NT, drivers, and userspace.

        For Linux, if an existing userspace application doesn’t work on aarch64, somebody somewhere will build a port. For Windows, so much of their stuff is proprietary that Microsoft is the only one able to build that port.

        Not because “Windows bad”, just a consequence of such a locked-down system, which doesn’t have anything open source to inherit.

      • Billegh@lemmy.world

        Yes, and DEC Alpha too. But all of that ceased with Windows 2000. The only porting since then was from 32-bit x86 to 64-bit. I’m willing to bet money I don’t have that Microsoft really only expected a port to 128-bit x86 until ARM started gaining steam.

    • kristoff@infosec.pub

      Hi, perhaps a stupid question, but what exactly is required to port an OS to a different architecture? OK, there is the boot process, and low-level language compilers… but what else?

      How much code actually has to be rewritten, and how much just needs a “make” to be recompiled?

      Kr.

      • nyan@lemmy.cafe

        Not my area, but since OSs are really low-level (obviously), they can be affected by details of the host architecture that we don’t often think about. Endianness, for instance.
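
        To make the endianness point concrete, here’s a tiny sketch showing how the same four bytes decode to different integers depending on byte order:

        ```python
        import struct

        # The same four bytes mean different numbers depending on whether the
        # architecture reads them little-endian or big-endian.
        raw = bytes([0x01, 0x00, 0x00, 0x00])
        little, = struct.unpack("<I", raw)  # x86 and (usually) ARM read it this way
        big, = struct.unpack(">I", raw)     # a big-endian CPU (e.g. older PowerPC) reads it this way
        print(little, big)  # 1 16777216
        ```

        Low-level code that pokes at bytes directly (network packets, on-disk formats, memory-mapped hardware registers) has to get this right on every architecture it supports.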

        I opened up the source package for the kernel I’m currently running (6.1.42) and looked at it. The smallest set of architecture-specific code is the ~2MB for sh (I assume that’s SuperH, a 32-bit RISC architecture from the early 1990s). 32-bit ARM takes up 27MB, although if you check the individual files, a fair amount of that is device trees and the like. So we’re talking about less than 50MB of arch-specific source code for most platforms, and probably less than 10 in many cases, but it depends on the design of the architecture and how many times it’s been extended.

        Looking at individual file names, topics addressed in the kernel’s arch-specific code files appear to include booting, low-level memory access, how to idle the CPU, crypto primitives, interrupts, suspending/hibernating the system and other power management, virtualization facilities if the CPU provides them, crash dumps and stack traces, and, yes, endianness.

        You may also need additional drivers for odd bits of hardware not used by other systems. Or not, but it’s a common sticking point with ARM SOCs and other small-format machines.

        That’s just the kernel. You’ll also need to establish a working cross-compiler before you can get your kernel onto the system. At that point, you can probably bootstrap much of the rest by running make and get to a working command-line system (GUI is going to be more of a crapshoot, requiring additional work on video acceleration and such in order to run well). And there may be odd warts in other pieces of software, each requiring a few lines of code that add up over time.

    • DigitalMuffin@sh.itjust.works

      No. Windows has a portable architecture, and it’s quite simple for Microsoft to compile it for whatever processor they want. Just change the HAL and you’re ready to go.

      • conciselyverbose@kbin.social

        Because it’s open source and most of the applications for it are open source. That means you can compile it and the applications specifically for the hardware you have.

        Windows does kind of support ARM on its specific hardware, but it can’t be adjusted for other hardware and they have to translate most applications to work. Apple has done much of that work for their hardware to work well, as well as very good translation for x86, and because they leaned hard into the transition, developers were mostly forced to compile for ARM going forward. Microsoft hasn’t done the same, and ARM is a tiny target, so it doesn’t happen with any regularity there.

      • bamboo@lemm.ee

        Because people have been doing so for a long time and have ironed out most of the quirks. The software is also generally quite simple, meaning there are just fewer quirks that need to be ironed out. And the ecosystem is largely open source, meaning everything can be recompiled to target the relevant architecture, so while translation layers are still useful, they’re not the essential tool they are in proprietary ecosystems. The main headaches that plague windows on arm mostly just don’t exist on the Linux side.

      • Free Palestine 🇵🇸@sh.itjust.works

        Because it’s not developed by some corporate fuckers whose only goal is to make as much money as possible; it’s developed by individual skilled people in their free time, because they’re passionate. They don’t want to sell some garbage, they genuinely want to make a good operating system for themselves and everyone else to freely use without any restrictions. FOSS is not about the money, it’s about actually creating something good.

          • PuppyOSAndCoffee@lemmy.ml

            Honestly MSFT does a pretty good job at getting things to work and keeping them working.

            You can open a windows laptop and trust it will start. You can close it, and trust it will sleep. You can open it and … there it is, as it was!

            You even have sound and networking. And that widget some contractor built 20 years ago in vb.whatever? It still runs OOTB.

            Yes, they have some backwards shit because of this, but for a lot of people, these ticks are the high watermark of computation.

            Not saying I agree it should be that way, or that any should be satisfied. Just be aware these are things that Microsoft excels at … and Linux is still getting there.

            But:

            VBA, why are you still a thing? WHY? Why is MS Access still not treated as the virus that it really is?

            Why does InfoPath still exist?

            Many grievances.

          • 4am@lemm.ee

            It’s so you’ll have to buy the other one when you discover this

      • gens@programming.dev

        Because you can try to compile it on ARM, and if something doesn’t work you can report it or fix it yourself. That said, Windows worked fine on ARM years ago. Many GPS, medical, and similar devices used to use Windows CE on ARM or MIPS. (Windows Phone too, on ARM.)

      • PuppyOSAndCoffee@lemmy.ml

        ARM (the company) as well as industry partners contribute code and resources to the Linux kernel… so that would be one reason why Linux on ARM runs well.

        Unsure how we are tracking Microsoft on ARM as worse than Linux on ARM; what benchmarks did we see?

  • AutumnSpark1226@lemmy.today

    Many ARM boards (including Raspberry Pis) are designed to run Linux. So the manufacturers can customize the kernel and firmware for the boards.

  • Bob Smith@sopuli.xyz

    I’ve run Linux on a Rockchip Chromebook, several Pi boards, and an M1 Macbook Pro, all with good results. I think that it helps that Linux comes from a long lineage of highly portable operating systems. One of the early victories of Unix was its ease of portability to new types of processor, due (at least in part) to being programmed in C. The BSDs and Linux have always had developers who took joy in getting the operating system up and running on more than one type of architecture. Debian, for instance, has run on one sort of ARM chip or another since around 2000. Windows has a core business that thrives on x86-based chip designs and they have had very little pressure to branch out over the years. Computer companies build around their operating system, rather than the other way around.

    • DM294@lemm.ee

      I’m also interested in running Linux on an M1 MacBook Pro. Which distro have you used for that?

      • Bob Smith@sopuli.xyz

        Oops! I meant to type ‘MacBook Air’. I’ll leave the goof up to give your comment context, but I don’t have an MBP these days. I used the initial Asahi release and I’ve been upgrading it in place for a year or so.

    • jabjoe@feddit.uk

      DeviceTree is a massive improvement compared to no discoverability AND no DeviceTree. Each device was a custom kernel build, with duplicate drivers and other code. It was madness. Linus lost his shit with the ARM kernel devs, saying it had to be sorted. DeviceTree was the solution.

      In the end, ARM will have discoverability. Buses like I2C and SPI will have some standard to discover what the hell is on them. But today it’s chaos, and DeviceTree is the only source of order.
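
      (For the curious: on a DeviceTree-based ARM board running Linux, the tree the kernel booted with is exposed under /proc/device-tree, so you can poke at what the board claims to contain. A quick sketch, assuming that path exists on your board:)

      ```python
      import os

      # Walk the flattened DeviceTree the kernel booted with (exposed at
      # /proc/device-tree on DeviceTree-based systems) and print each node's
      # "compatible" strings, i.e. what hardware the board says it has.
      DT_ROOT = "/proc/device-tree"

      for dirpath, _dirs, files in os.walk(DT_ROOT):
          if "compatible" in files:
              with open(os.path.join(dirpath, "compatible"), "rb") as f:
                  # The property is a NUL-separated list of strings.
                  compat = f.read().decode(errors="replace").strip("\x00").split("\x00")
              print(os.path.relpath(dirpath, DT_ROOT), compat)
      ```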

      • Bene7rddso@feddit.de

        You still need a way to know which memory addresses your I²C or SPI interface has. While DeviceTree is better than nothing, it’s way worse than the “It just works” of ACPI.

        • jabjoe@feddit.uk

          Completely agree that discoverability is better. But that needs hardware support. It’s crazy this isn’t solved. It means ARM devices are basically made to be e-waste. Make phones and other ARM devices like PCs. Google could mandate that hardware must be discoverable and be able to run a generic stock, to carry the Android brand.

    • jollyrogue@lemmy.ml

      Since the OP specified server hardware, probably not. RH said RHEL wasn’t going to support anything which didn’t use UEFI to boot, and Arm specified UEFI in their ServerReady hardware certification.

    • BCsven@lemmy.ca

      Raspberry Pi works fine with Linux, CutiePi also. My Iomega NAS is an ARM board running Debian… I don’t see the issue.

        • BCsven@lemmy.ca

          The issue with ARM is they aren’t all one board/chip; you have ARM-based designs licensed from them, and they are built to meet the criteria of what the customer requires. E.g. for my Iomega NAS there isn’t firmware boot; you just have to generate an empty section of 00s in the first 32 bytes of the drive so the board knows that is the drive to load the kernel from (no GRUB, no U-Boot), and the board is set to do the rest from the next partition.

            • BCsven@lemmy.ca

              But all x86 instructions are the same, right? That’s why it doesn’t matter what era your chip is from or what manufacturer; ARM can be very different.

            • Bene7rddso@feddit.de

              Booting isn’t the only problem with ARM. Instead of saving information about built-in devices on the board and exposing it via ACPI, board manufacturers create a devicetree and ship it with the kernel. This means that if you want to run your own kernel, you need to build your own devicetree.

      • pewpew@feddit.it

        These devices support Linux out of the box. Try installing a proper Linux distro on your phone, and good luck finding a graphics driver that is not software rendering.

        • BCsven@lemmy.ca

          As did Microsoft’s ARM devices support Windows, which is the point I was replying to with skullgiver. Of course Windows works if the ARM version was built for the hardware it runs on.

  • UFODivebomb@programming.dev

    Windows is LLP64, which is dumb, while Linux is LP64.

    OK, that only impacts C/C++ porting, but it’s still a silly choice by Windows.
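
    (For anyone unfamiliar with the jargon: on LP64 systems like 64-bit Linux, both long and pointers are 64-bit, while on LLP64 Windows long stays 32-bit and only long long and pointers are 64-bit. A quick way to see it, since Python’s ctypes mirrors the platform’s C ABI:)

    ```python
    import ctypes

    # LP64 (64-bit Linux/macOS): sizeof(long) == 8, sizeof(void*) == 8
    # LLP64 (64-bit Windows):    sizeof(long) == 4, sizeof(void*) == 8
    print("long:     ", ctypes.sizeof(ctypes.c_long))
    print("long long:", ctypes.sizeof(ctypes.c_longlong))
    print("void*:    ", ctypes.sizeof(ctypes.c_void_p))
    ```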

  • Knusper@feddit.de

    Linux has a low footprint, similar to ARM, so the two were naturally combined for low footprint platforms like Android and Raspberry Pis.

    The open-source ecosystem also helped. If proprietary software is compiled only for x86, then the best you can do is to try to run it with a translation layer.
    With open source, you can compile it for ARM yourself. No guarantees that that will just work, but devs can contribute fixes, and eventually the original software can be officially released with an ARM package.

    • Xusontha@ls.buckodr.ink (OP)

      Just in general, Linux on ARM more often than not just… works. Compared to Windows on ARM, that’s an anomaly. (Yes, I know part of the reason is that Microsoft is just bad at making it, but there’s got to be more to the Linux side for it to be that good.)

      • Muddybulldog@mylemmy.win

        A key factor is that Linux has been available for ARM since nearly “the beginning”. Unlike Windows, which was basically Intel-only for well over a decade, Linux has had strong support for multiple architectures throughout its lifecycle. As a result, software that grew up within that ecosystem tended to be more agnostic in design, which helps porting efforts.