

Cleveland Rapid Response could use donations if you can.
https://rhizomehouse.org/mutualaid/ is a good list of people you can help.
Shit is going down in Minnesota but they’ve never been more organized than they are now. Only we keep us safe.
If you want to help you can donate to a few groups.
https://www.gofundme.com/f/help-equip-twin-cities-legal-observers-with-ppe
https://nlgmn.org/mass-defense/
https://www.wfmn.org/funds/immigrant-rapid-response/
More comprehensive list here https://www.standwithminnesota.com/
A high-level overview of how to organize a rapid response, with links to more detailed resources: https://southerncoalition.org/resources/rapid-response-101/
Good general advice on organizing, and also a good place to find groups near you that are likely aligned: https://www.fiftyfifty.one/organizer-resources
Feel free to reach out for any other resources.


You scoff but this is already being done in China. They desolder good chips from bad cards and add them to a mule card.


Almost like an LLM wrote it…


I mean, what you’re proposing was the initial push behind GPT-3. All the experts said these GPTs would only hallucinate more with more resources and would never do anything more than repeat their training data as word salad posing as novelty. And on a very macro scale, they were correct.
The scaling problem
https://arxiv.org/abs/2001.08361
The scaling hype
https://gwern.net/scaling-hypothesis
Ultimately, hype won out.


will never achieve AGI or anything like it
On this we absolutely agree. I’m essentially targeting a more efficient interactive wiki. Something you could package and run on local consumer hardware. Similar to this https://codeberg.org/BobbyLLM/llama-conductor but it would be fully transformer-native, and there would only need to be one LLM for interaction with the end user. Everything else would be done in machine code behind the scenes.
I was unclear, I guess. I was talking about injecting other models, running their prediction pipeline for the specific topic, and then dropping them out of the window to be replaced by another expert, with that functionality handled by a larger model that runs the context window. Not nested models, but interchangeable ones, chosen by the vector of the tokens. So a qwq RAG trained on Python talking to a qwen3 quant4 RAG trained on bash, wrapped in DeepSeek R1 as the natural-language output, to answer the prompt “How do I best package a python app with uv on a linux server to run a backend for a …”
Currently this type of workflow is often handled with MCP servers called from some sort of harness, and as I understand it those still pass natural language around since they are all separate models. But my proposal takes the stagnation in the field and leverages it as interoperability. Rough sketch below.
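To make the swap-by-vector idea concrete, here’s a toy sketch in Python. The embed() function, the expert entries, and the string placeholders are stand-ins I made up for illustration, not real model calls; the only point is the mechanic of picking which expert gets injected based on where the prompt lands in vector space, with a wrapper model owning the context window and producing the natural-language answer.

```python
# Toy sketch of interchangeable experts selected by token vector, not a real system.
# embed(), EXPERTS, and the placeholders are illustrative stand-ins; in practice
# these would be the RAG-tuned models mentioned above, with a larger wrapper
# model running the context window.
import math

VOCAB = ["python", "pip", "uv", "venv", "bash", "systemd", "server", "package"]

def embed(text: str) -> list[float]:
    # Bag-of-words over a tiny vocabulary, purely for illustration.
    words = text.lower().split()
    return [float(words.count(w)) for w in VOCAB]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(x * x for x in b)) or 1.0
    return dot / (na * nb)

# Each expert advertises a topic vector; the wrapper swaps them in and out of
# the pipeline based on which vector the prompt sits closest to.
EXPERTS = {
    "python_packaging_expert": embed("python pip uv venv package"),
    "bash_deploy_expert": embed("bash systemd server"),
}

def route(prompt: str) -> str:
    p = embed(prompt)
    return max(EXPERTS, key=lambda name: cosine(p, EXPERTS[name]))

def answer(prompt: str) -> str:
    expert = route(prompt)
    # Placeholder for running the chosen expert's prediction pipeline; its
    # output would be intermediate state, not natural language.
    intermediate = f"[{expert} output for: {prompt!r}]"
    # Placeholder for the wrapper model that renders the final answer.
    return f"wrapper model renders {intermediate}"

print(answer("How do I best package a python app with uv on a linux server"))
```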


Ah, I see. However, you do bring up another point. I really think we need a true collection of experts able to communicate without the need for natural language, and then a “translation” layer to output natural language or images to the user. The larger parameter count would allow the injection of experts into the pipeline.
Thanks for the clarification, and also for the idea. I think one thing we can all agree on is that the field is expanding faster than any billionaire or company understands.
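For what it’s worth, a toy illustration of that handoff in Python. The “experts” here are just made-up functions passing vectors around; the point is only that nothing between them is natural language, and a single translation layer at the end decodes for the user.

```python
# Toy illustration only: experts exchange latent vectors, never prose.
# Real experts would be models sharing an embedding space; these functions
# are made-up stand-ins.
Latent = list[float]

def packaging_expert(latent: Latent) -> Latent:
    # Pretend refinement of the latent with packaging knowledge.
    return [x + 0.1 for x in latent]

def deploy_expert(latent: Latent) -> Latent:
    # Pretend refinement with deployment knowledge.
    return [x * 1.5 for x in latent]

def translation_layer(latent: Latent) -> str:
    # The only place natural language appears: decode the final latent for the user.
    return "answer decoded from latent " + str([round(x, 2) for x in latent])

def pipeline(prompt_latent: Latent) -> str:
    latent = packaging_expert(prompt_latent)  # expert-to-expert handoff is just vectors
    latent = deploy_expert(latent)
    return translation_layer(latent)

print(pipeline([0.2, 0.4, 0.6]))
```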


Unlocking the bootloader leaves your phone more vulnerable if you do not re-lock it after you install a different OS.
https://community.e.foundation/t/locking-the-bootloader-after-installation/68474
https://xdaforums.com/t/guide-to-lock-bootloader-while-using-rooted-grapheneos-magisk-root.4510295/
Also, root does increase your attack surface; however, it enables some conveniences for security profiles, so make sure you have a use case for it.
https://ssd.eff.org/ is a good collection of privacy best practices.


Sure, but giant-context models are still more prone to hallucination and to reinforcing confidence loops where they keep spitting out the same wrong result a different way.


Fundamentally no: linear progress requires exponential resources. The article below is about AGI, but transformer-based models will not benefit from just more grunt. We’re at the software stage of the problem now. But that doesn’t sign fat checks, so the big companies are incentivized to print money by developing more hardware.
https://timdettmers.com/2025/12/10/why-agi-will-not-happen/
Also the industry is running out of training data
https://arxiv.org/html/2602.21462v1
What we need are more efficient models and better harnessing. Or a different approach: reinforcement learning applied to RNNs that use transformers has been showing promise.
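To put a rough number on “linear progress requires exponential resources”, here’s a back-of-the-envelope sketch using the power-law shape from the scaling-laws paper linked in the other comment (arXiv 2001.08361). The exponent and the starting loss are illustrative round numbers, not fitted values.

```python
# Rough illustration of why equal loss improvements need exploding compute
# under a power law L(C) = L0 * (C0 / C) ** ALPHA. ALPHA ~ 0.05 is roughly the
# order of the compute exponent reported for transformers; all constants here
# are illustrative, not fitted.
ALPHA = 0.05
L0, C0 = 4.0, 1.0   # arbitrary starting loss and compute units

def compute_needed(target_loss: float) -> float:
    # Invert L = L0 * (C0 / C) ** ALPHA  ->  C = C0 * (L0 / L) ** (1 / ALPHA)
    return C0 * (L0 / target_loss) ** (1.0 / ALPHA)

# Each step shaves the same 0.5 off the loss; the compute column explodes.
for loss in [4.0, 3.5, 3.0, 2.5, 2.0]:
    print(f"loss {loss:.1f}: ~{compute_needed(loss):.2e}x compute")
```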


ZFS isn’t a backup, but it is a gateway drug


I see you’ve never worked in corporate IT


Yes, that’s why we have https://github.com/colonelpanichacks/flock-you


subscribed


Amutable are a gaggle of fucks



Might I humbly suggest a banner?


Great, now I have to watch classic Fry QI for the next 3 hours.


Hahahaha you sound like a technical writer I know.


Thank you!


Huh, according to this https://en.wikipedia.org/wiki/Straw_man we’re both making strawman arguments.


But then again, I did get fired from Starbucks as a teen.
You can still be competent at a shit job. But yes, if you pay shit pay you get what you paid for. We absolutely agree that it’s not the person behind the counter’s fault. They’re trying to get through a shift, not craft the perfect coffee order to your exacting standards. They can’t even control the grind or pull or steam temp on the new machines. You’re better off going to a fucking vending machine.


Did you document the setup? I’m interested in hosting this.