

Child stars man. Never had a chance.


I’m mostly trying to describe a feeling I don’t hear named very often.


I’ll give that a shot.
I’m running it in docker because it’s running on a headless server with a boatload of other services. Ideally whatever I use will be accessible over the network.
I think at the time I started, not everything supported Intel cards, but it looks like llama-cli has support for Intel GPUs. I’ll give it a shot. Thanks!
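For the accessible-over-the-network part: llama.cpp also ships llama-server, which exposes an OpenAI-compatible HTTP API, so anything else on the LAN can talk to it. A minimal client sketch, where the host, port, and model name are all placeholders I made up:

```python
import json
import urllib.request

# Hypothetical endpoint for llama-server's OpenAI-compatible API.
# "my-server", the port, and the model name are placeholders.
URL = "http://my-server:8080/v1/chat/completions"

payload = {
    "model": "deepseek-r1-8b",  # whatever model the server actually loaded
    "messages": [{"role": "user", "content": "Say hello."}],
}
req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)
print(reply["choices"][0]["message"]["content"])
```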


Thanks for the link. I was gonna ask if you were a writer, heh.
I agree. The tone of the ads this year felt almost like lampshading. Like, if we acknowledge the problem, we’re wise to what the audience is feeling, but we’re not going to do a damn thing to address it. It’s just something that needs to be done to make the ad feel remotely relevant.
AI is scary, but don’t be afraid of our surveillance device because we acknowledged that AI is scary
AI will sell you ads. Anyway, you’re watching an ad for AI
Work sucks amirite? Why not let us unemploy you?
There’s a wealth gap. Spend money on our stuff.
And I’m not going to even link the He Gets Us ads.
It was an especially interesting case because there was a question of whether the photographer lied about who actually took the picture. So he could either claim the monkey took it and lose the copyright, or claim he took it himself and have it lose all value.


Thanks for taking the time.
So I’m not using a CLI. I’ve got the intelanalytics/ipex-llm-inference-cpp-xpu image running and hosting LLMs to be used by a separate open-webui container. I originally set it up with Deepseek-R1:latest per the tutorial to get the results above. This was straight out of the box with no tweaks.
The interface offers some control settings (screenshot below). Is that what you’re talking about?

Mouse? I thought that was a koala all this time.



Well, not off to a great start.
To be clear, I think getting an LLM to run locally at all is super cool, but saying “go self-hosted” sort of glosses over the fact that getting a local LLM to do anything close to what ChatGPT can do is a very expensive hobby.


Any suggestions on how to get these to GGUF format? I found a GitHub project that claims to convert, but I’m wondering if there’s a more direct way.


“Go self-hosted,”
So yours and another comment I saw today got me to dust off an old docker container I was playing with a few months ago to run deepseek-r1:8b on my server’s Intel A750 GPU with 8GB of VRAM. Not exactly top-of-the-line, but not bad.
I knew it would be slow and not as good as ChatGPT or whatever, which I guess I can live with. I did ask it to write some example Rust code today, which I hadn’t even thought to try, and it worked.
But I also asked it to describe the characters in a popular TV show, and it got a ton of details wrong.
8B is the highest parameter count I can run on my card. How do you propose someone in my situation run an LLM locally? Can you suggest some better models?
Nope. Mirrors show you what you looked like when you were 3-4 nanoseconds younger.
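For scale: light covers about 30 cm per nanosecond, so the lag is just your distance to the glass, doubled. Quick sketch, with the half-meter distance being my assumption:

```python
# Round-trip light delay to a mirror. The 0.5 m distance is an assumed
# example; the speed of light is the real constant.
C = 299_792_458              # speed of light, m/s
distance_m = 0.5             # assumed distance from the mirror
delay_ns = 2 * distance_m / C * 1e9
print(f"you see yourself {delay_ns:.1f} ns in the past")  # ~3.3 ns
```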
But a cattery couldn’t be used in a circuit. It only has a cathode.
Not sure if this is intentional or if the author doesn’t understand the source they’re parodying, but putting multiple brackets around a word (in this case “job”) in a conspiracy/political context can be interpreted as an antisemitic dogwhistle.
Edit: I hope you’ll note my careful wording: I did not imply the author meant anything by this. I was simply bringing it up in case it was unintentional. I’ve since learned that some people use <<>> instead of quotes.
…Trying to work out if there’s a way you could orient a camera, the subject, and the observer such that they could see a picture of when you were older.


Yo momma’s so fat, she sat on a binary tree and squashed it into a linked list in O(1) time.
What’s funny is that it works even when people know the initial price is bullshit.
A study at MIT had people participate in a silent auction. They were asked to list the last two digits of their social security number and then asked if they would be willing to pay that many dollars for each item before placing their bid.
On average, people with higher SSN digits bid more.


Just missed Bandcamp Friday. Also, get u some flac.


What’s fun is how often this principle is used every day. For example, when you upload a video to YouTube, you’re assigned a unique URL, but it would be too slow to check your URL against a central list to make sure nobody else is already using it. There are millions of videos uploaded every day, and thousands of servers spread all over the world.
Instead, YouTube just generates a truly random URL and depends on the odds of two videos having the same URL being effectively zero.
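Rough sketch of the idea. The 11-character, 64-symbol format matches what YouTube URLs look like from the outside; the generation scheme here is my assumption, not their actual code:

```python
import secrets

# Assumed scheme for illustration: 11 characters from a 64-symbol alphabet,
# giving 64**11 (about 7.4e19) possible IDs.
ALPHABET = ("ABCDEFGHIJKLMNOPQRSTUVWXYZ"
            "abcdefghijklmnopqrstuvwxyz"
            "0123456789-_")

def random_video_id(length: int = 11) -> str:
    """Pick each character with a cryptographically secure RNG."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

total_ids = 64 ** 11   # ~7.4e19
existing = 10**10      # assume ten billion videos already exist
print(random_video_id())  # e.g. 'kX3b9_QzTw0'
print(f"odds a fresh ID collides with any of them: {existing / total_ids:.1e}")
# ~1.4e-10, i.e. about one in seven billion uploads
```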
The same is true for Bitcoin. If you could guess a Bitcoin private key for any currently used wallet, you’d have full access to the funds within that wallet. This can even be done offline. Even if you could guess trillions of private keys per second, the odds of you hitting even one that’s already been used are low enough to be totally secure.
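Back-of-envelope on that, where the 2**256 keyspace is the real part and the wallet count and guess rate are deliberately generous numbers I made up:

```python
# How long until a brute-forcer expects to hit even one funded wallet?
keyspace = 2 ** 256              # possible private keys (near enough)
used_wallets = 10**9             # assume a billion funded wallets
guesses_per_second = 10**12      # assume a trillion guesses per second

seconds = keyspace / (used_wallets * guesses_per_second)
years = seconds / (60 * 60 * 24 * 365)
print(f"expected time to one hit: {years:.1e} years")  # ~3.7e48 years
```

For comparison, the universe is about 1.4e10 years old.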
Harry Harlow proven right once again.