Nothing on that linked page implies it works, just that some countries have done it.
I believe it's a bot/site that mirrors Reddit subreddits to Lemmy communities. It's anything from @lemmit.online.
It's pretty trivial for them to block all of the major instances though, or even every instance federated with a major one.
You can probably route the audio you want with helvum or qpwgraph. That's what I've used to route screen-sharing audio in the past.
I just read the entire article you linked, and it seems pretty in line with what I was taught in school about what happened. And it definitely doesn't make me sympathetic to the PLA or the government.
If you're using a non-phone camera, I don't think there's any upside to digital zoom. Just take the photo at full resolution (and raw if you can) and then do the cropping/zooming in GIMP, etc. That way you have full control of the crop, and you might even be able to use a better upscaling algorithm than the camera's built-in one.
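If it helps, here's a minimal sketch of that crop-and-upscale workflow in Python with Pillow instead of GIMP (the filenames, crop box, and scale factor are just placeholders):

```python
# Hypothetical example: crop a full-resolution photo and upscale the crop,
# rather than relying on the camera's digital zoom. Paths and numbers are made up.
from PIL import Image

img = Image.open("full_res_photo.jpg")

# Crop box is (left, upper, right, lower) in pixels: the region you would
# otherwise have "zoomed" into.
crop = img.crop((1200, 800, 2200, 1550))

# Upscale the crop with Lanczos resampling, which tends to look better than
# the simple interpolation some cameras use for in-camera digital zoom.
zoomed = crop.resize((crop.width * 2, crop.height * 2), Image.Resampling.LANCZOS)
zoomed.save("cropped_zoom.png")
```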
This is not true on phones, because when you zoom in digitally the phone does something called super-resolution imaging, where it takes a whole sequence of photos and then stacks them to try to fill in the missing information. That tech hasn't caught up to dedicated cameras yet.
What camera are you using?
If you're on Linux, it's on Flathub: https://flathub.org/apps/org.ardour.Ardour
This is what I've been going through; it's pitched as teaching Rust to people who already know other languages. I'm not very far in at all, but it seems decent? https://google.github.io/comprehensive-rust/
The important thing is that the game itself uses Vulkan. I believe that's entirely independent of whether your window manager uses Vulkan. If your games work, then they're probably using Vulkan, and they won't work any better if Sway does as well.
This is a really fantastic explanation of the issue!
It’s more like improv comedy with an extremely adaptable comic than a conversation with a real person.
One of the things I've noticed is that the training/finetuning done to make it give good completions in the "helpful AI conversation" scenario flattens a lot of the capabilities of the underlying language model for really interesting and specific completions. I remember playing around with GPT-2 in its native text-completion mode, and even that much weaker model could complete a much larger variety of text styles without sliding into the sameness and slickness of the current chat-model fine-tuning.
A lot of the research I read on LLMs uses them in the original token-completion context, but pretty much the only way people interact with them is through a thick layer of AI chatbot improv. As an example for code, I imagine you'd have more success using an LLM to edit your code if the context you give it starts out written like a review of a pull request for that code, or some other commentary in a form that matches the way code is reviewed in the training data. But instead of being able to create that context directly, we have to ask for code review through the fogged window of a chat between an AI assistant and a person discussing code, and that form of chat likely isn't well represented in the training data.
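To make that concrete, here's a rough sketch of the kind of thing I mean, using GPT-2 through the transformers text-generation pipeline as a stand-in for a base completion model (the PR number, prompt wording, and function name are all made up):

```python
# Sketch: frame the prompt as a pull-request review and let a base model
# complete the text, instead of asking a chat assistant for a review.
# GPT-2 is just a toy stand-in; the PR details below are invented.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = """Pull request #42: add retry logic to the HTTP client

Reviewer comments:

The retry loop in `fetch_with_retries` looks reasonable, but"""

# Sample a continuation of the "review" rather than a chat-style answer.
result = generator(prompt, max_new_tokens=80, do_sample=True)
print(result[0]["generated_text"])
```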
For gaming like that (remotely, over the network), I'd recommend Sunshine and Moonlight. They work great if your network can handle the upload.
I would understand "working" to mean accomplishing its stated goal, which is increasing birthrates. I believe there's very limited evidence for that.