
  • Not the person you replied to, but I’m in agreement with them. I did tech hiring for some years for junior roles, and it was quite common to see applicants with a complete alphabet soup of certifications. More often than not, these cert-heavy applicants would show a complete lack of ability to apply that knowledge. For example they might have a network cert of some kind, yet were unable to competently answer a basic hypothetical like “what steps would you take to diagnose a network connection issue?” I suspect a lot of these applicants crammed for their many certifications, memorized known answers to typical questions, but never actually made any effort to put the knowledge to work. There’s nothing inherently wrong with certifications, but from past experience I’m always wary when I see a CV that’s heavy on certs but light on experience (which could be work experience or school or personal projects).


  • That’s just what happens to CEOs of publicly traded companies when they have a bad year, and Intel had a really bad year in 2024. I’m certainly hoping that their GPUs become serious competition for AMD and Nvidia, because consumers win when there’s robust competition. I don’t think Pat’s ousting had anything to do with GPUs, though. The vast majority of Intel’s revenue comes from CPU sales, and the news there was mostly bad in 2024: the Arrow Lake launch was mostly a flop; there were all sorts of revelations about overvolting and corrosion issues in Raptor Lake (13th and 14th gen Intel Core) CPUs; broadly speaking, Intel is getting spanked by AMD in the enthusiast market; and AMD has also just recently taken the lead in datacenter CPU sales. Intel maintains a strong lead in corporate desktop and laptop sales, but the overall trend for their CPU business is quite negative.

    One of Intel’s historical strengths was vertical integration: they both designed and manufactured their CPUs. However, Intel lost the process technology lead to TSMC quite a while ago. One of Pat’s big early announcements was “IDM 2.0” (“Integrated Device Manufacturing 2.0”), which was supposed to address those problems and beef up Intel’s ability to keep pace with TSMC. It suffered a lot of delays, and Intel had to outsource all Arrow Lake manufacturing to TSMC in an effort to keep pace with AMD. I’d argue that’s the main reason Pat got turfed: he took a big swing to get Intel’s integrated design and manufacturing strategy back on track, and for the most part did not succeed.


  • Being a private company has allowed Valve to take some really big swings. Steam Deck is paying off handsomely, but it came after the relative failure of the Steam Controller, Steam Link and Steam Machines. With their software business stable, they can allow themselves to take big risks on the hardware side, learn what does and doesn’t work, then try again. At a publicly traded company, CEO Gabe Newell probably gets forced out long before they get to the Steam Deck.


  • However, it’s worth mentioning that WireGuard is UDP only.

    That’s a very good point, which I completely overlooked.

    If you want something that “just works” under all conditions, then you’re looking at OpenVPN. Bonus: if you want to marginally improve the chance that everything just works, even in the most restrictive places (like hotel wifi), have your VPN use port 443 for TCP and 53 for UDP. These are the most heavily used ports for web and DNS, meaning your VPN traffic will just “blend in” with normal internet noise (disclaimer: yes, deep packet inspection exists, but rustic hotel wifis aren’t going to be using it ;)

    Also good advice. In my case the VPN runs on my home server, there are no UDP restrictions of any kind on my home network and WireGuard is great in that scenario. For a mobile VPN solution where the network is not under your control and could be locked down in any number of ways, you’re definitely right that OpenVPN will be much more reliable when configured as you suggest.
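    As a concrete sketch, the port suggestion quoted above could look like the following in OpenVPN server configs. Note that a single OpenVPN process can’t listen on both TCP and UDP at once, so this assumes two instances, and everything beyond the port/protocol directives (certs, routes, etc.) is omitted:

    ```conf
    # server-tcp.conf — listen on TCP 443 so VPN traffic resembles HTTPS
    port 443
    proto tcp

    # server-udp.conf — a second OpenVPN instance on UDP 53, the DNS port
    port 53
    proto udp
    ```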


  • I use WireGuard personally. OpenVPN has been around a long time, and is very configurable. That can be a benefit if you need some specific configuration, but it can also mean more opportunities to configure your connection in a less-secure way (e.g. selecting an older, weaker encryption algorithm). WireGuard is much newer and supports fewer options. For example it only does one encryption algorithm, but it’s one of the latest and most secure. WireGuard also tends to have faster transfer speeds; I believe this is because many of OpenVPN’s design choices were made long ago. Those design choices made sense for the processors available at the time, but simply aren’t as performant on modern multi-core CPUs. WireGuard’s more recent design does a better job of taking advantage of modern processors, so it tends to win speed benchmarks by a significant margin. That’s the primary reason I went with WireGuard.
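    To illustrate how little there is to configure, here’s a minimal hypothetical WireGuard client config (the keys, addresses, and endpoint are placeholders); note there’s no cipher selection anywhere:

    ```conf
    # /etc/wireguard/wg0.conf — hypothetical client config
    [Interface]
    PrivateKey = <client_private_key>
    Address = 10.0.0.2/32

    [Peer]
    PublicKey = <server_public_key>
    Endpoint = vpn.example.com:51820   # 51820 is WireGuard's default port
    AllowedIPs = 0.0.0.0/0             # route all traffic through the tunnel
    PersistentKeepalive = 25           # helps keep NAT mappings alive
    ```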

    In terms of vulnerabilities, it’s tough to say which is better. OpenVPN has the longer track record of course, but its code base is an order of magnitude larger than WireGuard’s. More eyes have been looking at OpenVPN’s code for more time, but there’s more than 10x as much OpenVPN code to look at. My personal feeling is that a leaner codebase is generally better for security, simply because there are fewer lines of code in which vulnerabilities can lurk.

    If you do opt for OpenVPN, I believe UDP is generally better for performance. TCP support is mainly there for scenarios where UDP is blocked, or on dodgy connections where TCP’s more proactive handling of dropped packets can reduce the time before a lost packet gets retransmitted.




  • I think you’re referring to FlareSolverr. If so, I’m not aware of a direct replacement.

    Main issue is it’s heavy on resources (I have an rpi4b)

    FlareSolverr does add some memory overhead, but otherwise it’s fairly lightweight. On my system FlareSolverr has been up for 8 days and is using ~300MB:

    NAME           CPU %     MEM USAGE
    flaresolverr   0.01%     310.3MiB
    

    Note that any CPU usage introduced by FlareSolverr is unavoidable because that’s how CloudFlare protection works. CloudFlare creates a workload in the client browser that should be trivial if you’re making a single request, but brings your system to a crawl if you’re trying to send many requests, e.g. DDOSing or scraping. You need to execute that browser-based work somewhere to get past those CloudFlare checks.

    If hosting the FlareSolverr container on your rpi4b would put it under memory or CPU pressure, you could run the Docker container on a different system. When setting up FlareSolverr in Prowlarr you create an indexer proxy with a tag. Any indexer with that tag sends its requests through the proxy instead of sending them directly to the tracker site. When FlareSolverr is running in a local Docker container, the address for the proxy is localhost, e.g. “http://localhost:8191”.

    If you run FlareSolverr’s Docker container on another system that’s accessible to your rpi4b, you could create an indexer proxy whose Host is “http://<other_system_IP>:8191”. Keep security in mind when doing this: if you’ve got a VPN connection on your rpi4b with split tunneling enabled (i.e. connections to local network resources are allowed when the tunnel is up), then this setup would allow requests to these indexers to escape the VPN tunnel.
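    A minimal sketch of running FlareSolverr on that other system with Docker Compose (the image and port are the project’s defaults; adjust as needed):

    ```yaml
    # docker-compose.yml on the other system
    services:
      flaresolverr:
        image: ghcr.io/flaresolverr/flaresolverr:latest
        ports:
          - "8191:8191"   # the port Prowlarr's indexer proxy Host points at
        restart: unless-stopped
    ```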

    On a side note, I’d strongly recommend trying out a Docker-based setup. Aside from FlareSolverr, I ran my servarr setup without containers for years and that was fine, but moving over to Docker made the configuration a lot easier. Before Docker I had a complex set of firewall rules to allow traffic to my local network and my VPN server, but drop any other traffic that wasn’t using the VPN tunnel. All the firewall complexity has now been replaced with a gluetun container, which is much easier to manage and probably more secure. You don’t have to switch to Docker-based all in one go, you can run hybrid if need be.
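    For the curious, the gluetun pattern looks roughly like the sketch below. The provider settings and the qbittorrent service are placeholders for whatever you actually run; the environment variable names follow gluetun’s documented options:

    ```yaml
    # Hypothetical compose fragment: containers that set
    # network_mode: "service:gluetun" share gluetun's network namespace,
    # so their traffic can only leave through the VPN tunnel.
    services:
      gluetun:
        image: qmcgaw/gluetun
        cap_add:
          - NET_ADMIN
        environment:
          - VPN_SERVICE_PROVIDER=custom        # placeholder provider
          - VPN_TYPE=wireguard
          - WIREGUARD_PRIVATE_KEY=<your_key>   # placeholder credential
      qbittorrent:
        image: lscr.io/linuxserver/qbittorrent  # example downstream service
        network_mode: "service:gluetun"
    ```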

    If you really don’t want to use Docker then you could attempt to install from source on the rpi4b. Be advised that you’re absolutely going offroad if you do this as it’s not officially supported by the FlareSolverr devs. It requires installing an ARM-based Chromium browser, then setting some environment variables so that FlareSolverr uses that browser instead of trying to download its own. Exact steps are documented in this GitHub comment. I haven’t tested these steps, so YMMV. Honestly, I think this is a bad idea because the full browser will almost certainly require more memory. The browser included in the FlareSolverr container is stripped down to the bare minimum required to pass the CloudFlare checks.

    If you’re just strongly opposed to Docker for whatever reason then I think your best bet would be to combine the two approaches above. Host the FlareSolverr proxy on an x86-based system so you can install from source using the officially supported steps.


  • Anything that pushes the CPUs significantly can cause instability in affected parts. I think there are at least two separate issues Intel is facing:

    • Voltage irregularities causing instability. These could potentially be fixed by the microcode update Intel will be shipping in mid-August.
    • Oxidation of CPU vias. This issue cannot be fixed by any update; any affected part has corrosion inside the CPU die, and only replacement would resolve the issue.

    Intel’s messaging around this problem has been very slanted towards talking as little as possible about the oxidation issue. Their initial Intel community post was very carefully worded to make it sound like voltage irregularity was the root cause, but careful reading of their statement reveals that it could be interpreted as only saying that instability is a root cause. They buried the admission that there is an oxidation issue in a Reddit comment, of all things. All they’ve said about oxidation is that the issue was resolved at the chip fab some time in 2023, and they’ve claimed it only affected 13th gen parts. There’s no word on which part numbers, date ranges, processor batch codes, etc. are affected. It seems pretty clear that they wanted the press talking about the microcode update and not the chips that will have to be RMA’d.


  • Having read all of them, I think of these books as three different sets:

    • Books 1-6 of the main series basically cover the same time period as the TV show. If you enjoyed the first two books, it’s extremely likely that you will enjoy books 3-6. The primary story arc started in book 1 comes to a very satisfying conclusion in book 6, broadly the same as it did in the show.
    • Books 7-9 are more like a sequel series than a direct continuation of books 1-6. The primary characters return but it’s really a new story arc. Personally I read book 7 at release, then later bounced off book 8 when it came out however many months later. It was only when I came back to reread the entire series that books 7-9 clicked for me. For my money everything came to a satisfying conclusion in book 9, with answers to most of the bigger mysteries behind the entire series (i.e. who built the rings, how did they build them, who killed the ring builders, etc.).
    • The novellas and short stories focus on backstories and side characters. I particularly liked that they reveal where certain side characters eventually ended up; not naming any names for spoiler reasons. Memory’s Legion collects all of these into a single book-length collection, which is probably the best way to get them.

    TL;DR books 1-6 for sure, books 7-9 probably, novellas if you go through books 1-9 and still want more.


  • OK, I can do that. For the record I think the books are pretty great, though I do admit they stretch the bounds of believability at times.

    Major spoilers, lore dump
    Are you sure???

    OK then, here are the details.

    What’s the deal with their technology?

    Technology in the silos is kept deliberately primitive for a number of reasons. First, simpler tech is easier to maintain and repair. While the silo inhabitants can manufacture many things, only so many CPUs, monitors, hard drives etc. were placed inside each silo. Second, simpler tech makes the silos easier to control. I don’t remember if this is mentioned in the show, but the books mention that porters carry paper notes up and down the stairs because computer messages are expensive. There’s no reason for them to be expensive, except that the powers in control of the silos don’t want their inhabitants to be able to effectively coordinate and organize resistance across the levels (this is also part of why the silo has no elevator).

    The inhabitants are given enough tools and knowledge to build simple things and maintain mechanical devices, but anything involving high magnification is outlawed because if someone looks too closely at how the electronics work, they can start figuring out things they must not know. For example, all the radios in each silo were placed there when they were constructed and were tuned to communicate only within that silo. If someone breaks down a radio and figures out how it works then they might be able to retune it and pick up broadcasts from other silos. Much like the builders didn’t want inhabitants coordinating between levels, they definitely don’t want them to even be aware of the other silos, much less start coordinating with them.

    Why are the restrictive and nonsensical rules in place?

    Again, control. In order to keep the populace confined and healthy, there have to be strict rules on who can procreate, who needs to do what job, and above all that no one can simply open the doors and let death inside. Humans aren’t inclined to thrive under such conditions, which tends to lead to uprisings that have occurred multiple times in the history of each silo. The rules, the cleanings and the memory-wipe drugs are all part of an effort to keep the populace contained and safe.

    I can answer your other questions as well, but this further lore doesn’t get revealed until much later in the books. Be sure you want full lore spoilers before you click.

    What was the ecological disaster?

    Self-inflicted genocidal nanobots. Read further to understand why “self-inflicted.”

    Who built the silos?

    The silo project was the brainchild of a US Senator. Through extensive political horse-trading, leverage, dirty tactics, you name it, he was able to secure funding for the silos and oversee their construction. There are 50 or 51 silos in total, outside Atlanta. The cover story of their construction was that they were to provide deep underground storage for nuclear waste. In actual fact they were long-duration isolated habitats to preserve humanity from the fallout of a nanotech war. The initial population of each silo came from a big ribbon-cutting ceremony / political rally / Democratic convention, where reps from each state were in the area around each silo. Atlanta was nuked to provide a reason to get everyone underground, at which point each silo was sealed.

    This is the other reason magnification is verboten in the silos. If the inhabitants got really good at magnification then they might find the killer nanobots outside their door, and then there would be some very difficult questions with no good answers.

    The Senator’s thinking went like this:

    • Nanotechnology is reaching a point where someone could use it as a weapon to kill entire populations based on genetic markers.
    • Such a weapon would be almost impossible to stop.
    • It is inevitable that someone somewhere will attempt to use such a weapon to wipe out their rivals. No nuclear fallout, no lingering poisons, no destruction of infrastructure, just whole countries depopulated and free for the taking.
    • If the weapon will inevitably be constructed and cannot be stopped then we must build it and use it first, before anyone else does.

    Nanobots were actually released by the silos themselves after they were first sealed, as well as being released worldwide to kill everyone not in the silos. Whenever the silo doors are opened, additional nanobots are released to keep the area around the silo uninhabitable so that the inhabitants are strongly motivated to stay inside. There’s actually one silo not like the others, Silo 1. This silo’s inhabitants work in six-month shifts, monitoring the other silos and going into cryosleep between shifts. Silo 1 works with the heads of IT of each other silo, reading them in on part of the history so those IT heads understand the stakes. Of course, the heads of IT are not told that the inhabitants of Silo 1 deliberately caused the disaster in the first place.

    Every silo is rigged to blow so that if it looks like its inhabitants have completely escaped control, Silo 1 can remotely detonate and pancake every floor in a silo down to the bottom of its pit. Silo 1 also has bomber drones as a backup, in the event that the inhabitants find and disable the remote detonation capabilities. This is why the head of IT is so frantic to prevent an uprising. He knows that he has to maintain order at all costs, or Silo 1 can literally pull the plug on their silo and all its inhabitants.

    So why the head of IT? Because the other part of the plan is the servers in each silo. They maintain records for every silo inhabitant, and Silo 1 has backdoor access to that data. The silo project is also an attempt to prevent a repeat of a nanotech war, by reducing humanity to a homogenous and unified population. At some future date when each silo’s supplies are running out, one lucky silo gets told where to find the digging machine at the bottom of their silo. The chosen silo would be the one with the most cohesive population and the best chance of long-term survival, according to computer models and simulations. All the other silos are to be destroyed.


  • I briefly experimented with it ages ago. And I mean ages ago, like 20+ years ago. Maybe it’s changed somewhat since then, but my understanding is that Gentoo doesn’t provide binary packages. Everything gets compiled from source using exactly the options you want and compiled exactly for your hardware. That’s great and all but it has two big downsides:

    • Most users don’t need or even want to specify every compile option. The number of compile options to wade through for some packages (e.g. the kernel) is incredibly long, and many won’t be applicable to your particular setup.
    • The benefits of compiling specifically for your system are likely questionable, and the amount of time it takes to compile can be long depending on your hardware. Bear in mind I was compiling on a Pentium 2 at the time, so this may be a lot less relevant to modern systems. I think it took me something like 12 hours to do the first-time compile when I installed Gentoo, and then some mistake I made in the configuration made me want to reinstall and I just wasn’t willing to sit through that again.
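    For context, Gentoo’s global compile options live in /etc/portage/make.conf; a hypothetical example of the kind of tuning involved (the specific flags here are illustrative, not a recommendation):

    ```conf
    # /etc/portage/make.conf — global defaults applied to every package
    COMMON_FLAGS="-march=native -O2 -pipe"   # tune codegen for this exact CPU
    CFLAGS="${COMMON_FLAGS}"
    CXXFLAGS="${COMMON_FLAGS}"
    MAKEOPTS="-j4"                           # parallel build jobs
    USE="X wayland -gnome -kde"              # enable/disable optional features
    ```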

  • Several years ago I was getting a lot of acid reflux. Went to the doctor, he gave me the “no-fun diet” list with all the foods to avoid because they can cause indigestion. Everything I loved was on that list. Beer. Cheese. Fried foods. Hot peppers. And, of course, coffee. I was highly motivated to achieve some kind of resolution to these stomach problems so I gave up everything on the list except coffee. Lo and behold, the symptoms remained. I switched the roles and gave up only coffee. The stomach symptoms disappeared, to be replaced by the worst fatigue headaches I’ve ever encountered. It took two weeks for the headaches to finally fade, and now I’m a tea drinker for life.

    I drink Earl Grey tea, mostly because I’m forgetful as hell and I need a tea where I can just leave the tea bag in there for as long as it takes me to remember that I made tea. With most other black teas, if you don’t yank the bag out at the right time your tea will get bitter as hell. Not Earl Grey, you can forget that shit for half an hour and the Earl don’t mind. You’ll still come back to a cup of tea that’s perfectly drinkable. When I want to take it to the next level I get some Cream of Earl Grey, the kind with the little blue flower petals in it. Heavenly.