Thanks for maintaining this fork, this was the last missing thing on Lemmy for me
blog: thomasdouwes.co.uk
I also run some bots:
@FlagWaverBot
I have done some testing and I found a few reasons I'm having issues with webtorrent:

- The only reason they were working at all is because they were downloading from the HTTP URL in the torrent file; P2P was not working at all.
- To download the webtorrent from the Blender instance I need to have the video playing in my browser to peer with the webtorrent client; the instance peers don't work with non-PeerTube webtorrent clients.
- The reason instant.io was broken is that my adblocker was blocking the tracker.
- The tracker on my PeerTube instance is broken.
EDIT:
I was a bit wrong here: there are two different formats in PeerTube, webtorrent and HLS. I was getting confused about why the video on my instance (HLS) and the one on the Blender instance (webtorrent) were behaving differently with webtorrent clients. They are completely different formats, so that makes sense now.
Webtorrent seems to have some issues with peer discovery. I've tried the instant.io site linked on webtorrent.io and I can't get it to download or share anything. The desktop client managed to download a torrent from my PeerTube instance over normal BitTorrent, but I can't share it over webtorrent. I downloaded a video from my PeerTube instance using btorrent.xyz over webtorrent, but I can't seed new files because the peers don't find each other. When I use webtorrent with a tracker (like PeerTube's) it works fine, but how were sites like instant.io supposed to discover peers without trackers? I don't think DHT exists for webtorrent yet.
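For what it's worth, browser WebTorrent clients can only discover peers through WebSocket (`ws://`/`wss://`) trackers listed in the magnet's `tr=` parameters, since browsers can't speak UDP for DHT. A quick way to check whether a magnet link has any usable trackers at all (the magnet below is made up for illustration):

```shell
# Example (made-up) magnet link with two trackers: one wss:// (usable by
# browser WebTorrent clients) and one udp:// (browsers can't use this one).
magnet='magnet:?xt=urn:btih:0123456789abcdef0123456789abcdef01234567&tr=wss%3A%2F%2Ftracker.openwebtorrent.com&tr=udp%3A%2F%2Ftracker.example.org%3A1337'

# Split the query string, keep only the tr= parameters, and URL-decode them.
echo "$magnet" | tr '&' '\n' | sed -n 's/^tr=//p' | \
  python3 -c 'import sys, urllib.parse
for line in sys.stdin:
    print(urllib.parse.unquote(line.strip()))'
```

If no `wss://` tracker shows up (or your adblocker blocks the one that does), two browser peers will never find each other, which matches what I saw with instant.io.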
You can manually seed videos on instances using redundancy, but I was thinking automatic redundancy for watched videos might be a good idea. I guess you can do automatic redundancy for entire instances, but that would take up a lot of storage space.
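For reference, automatic redundancy is configured per-instance in PeerTube's `production.yaml`; something like the following (the keys and strategy names come from PeerTube's sample configuration, the sizes are illustrative) makes an instance automatically mirror videos from the instances it follows:

```yaml
# production.yaml — illustrative values
redundancy:
  videos:
    check_interval: '1 hour'
    strategies:
      - size: '10GB'
        min_lifetime: '48 hours'
        strategy: 'most-views'      # mirror the most-viewed videos
      - size: '10GB'
        min_lifetime: '48 hours'
        strategy: 'recently-added'  # mirror newly uploaded videos
        min_views: 10
```

The `size` caps are what keep this from eating all your storage, which is exactly the trade-off above.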
One of the nice things with BitTorrent is the high reliability, so I assumed that was what PeerTube was trying to do. I guess the idea is not to provide data redundancy but to split load instead?
Why? If 5 instances are seeding the video, clients should be able to download from all 5 instances and spread the bandwidth usage, right?
Why not also use the instance to re-seed? It could keep seeding after the visitor closes the video.
Would it not make more sense if your instance downloaded and redistributed the torrent? Then you could keep seeding after the tab closed. It also wouldn't leak your IP then.
What about peer discovery? I opened that webtorrent website in two browsers and they didn't peer. Is that demo real?
Download ML thing.
make new venv.
pip install -r requirements.txt.
pip can’t find the right versions.
pip install --upgrade pip.
pip still can’t find the right versions.
install conda.
conda breaks for some reason.
fix conda.
install with conda.
pytorch won’t compile with CUDA support.
install 2,000,000GB of nvidia crap from conda.
pytorch still won’t compile.
install older version of gcc with conda.
pytorch still won’t compile.
reinstall the entire operating system with debian 11.
apt can’t find shitlib-1.
install shitlib-2.
it’s not compatible with shitlib-1.
compile it from source.
automake breaks.
install debian 10.
It actually works.
“Join our discord to get the model”.
give up.
oh wow that’s strange. I cannot imagine what they must have done in the nginx config to do that. I guess there isn’t anything you can do until lemmy.ml fixes their IPv6 then. I just checked my logs and lemmy.ml isn’t federating with my instance anymore. That’s very bad! Also explains the lack of content I’ve been seeing.
EDIT: OK, never mind, lemmy.ml is federating with me. I just connect to it over IPv6.
If you know how to use Greasemonkey, there is a script to make Lemmy look more like old.reddit here. Personally, it fixed a lot of the UI issues for me.
What do you mean by 3? You shouldn’t need authorisation to join a federated community.
Look at the usernames in the replies to this comment
Nah, I can’t see any reason to make more than one account.
The file you downloaded is a compressed JSON file, it’s not something you can really just look at. But it contains all the data needed to build a nice UI around.
I don’t know what OS you are on, but on Linux you can run `zstd -d -c file.zst | jq .` and it will print everything in the file. It’s not really readable though. Also, it doesn’t have any of the media content, only the text.
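Here is a self-contained sketch of those commands (it fabricates a tiny `.zst` file so you can try the pipeline without the real export; the real export's JSON layout may differ, and it assumes `zstd` and `jq` are installed):

```shell
# Fabricate a small compressed JSON file standing in for the export
# (the real export's structure may differ).
printf '{"posts":[{"name":"hello"}],"comments":[]}' | zstd -q -f -o file.zst

# Decompress to stdout and pretty-print the JSON.
zstd -d -c file.zst | jq .

# List the top-level keys to get a feel for the structure.
zstd -d -c file.zst | jq -r 'keys[]'
```

Once you know the keys, you can pull out individual fields with jq instead of scrolling through the whole dump.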
That assumes the mods do. I fear reddit will turn into another TinyPic situation. Even if it isn’t an image host, there will be pages of answers on forums and Stack Exchange pointing to dead reddit links. The frustration of finding a 10-year-old forum post of someone having the same issue as you, only for the sole answer to point to a dead link, is incredible.
I hate reddit. But it feels like the Library of Alexandria burning down (yeah, I know). All those Google search results and educational subreddits are shutting down forever, and because they are too small reddit won’t force them open again.
A lot are in the Pushshift archive, but that cuts off at 2022. Also, it doesn’t include a lot of the smaller subreddits.
I have had my PC running 24/7 with multiple VPNs to avoid rate limits, downloading as much as I can before the API dies, but with some blackouts moving forward a day I have already missed a few.
Jesus, on the higher end the value gets absurd: 128 GB of RAM with 2×8 TB HDDs for €52.44 a month. Shame the lower-end servers aren’t quite such good value. Maybe I should just find some more money.
Looks good, but I’d rather pay for a server than use Oracle; it seems a little too good for free, especially from Oracle. The satellite image processing I want to do already has some trouble with my 48 GB of RAM, so 24 might not work. Also, I wanted to use the storage for an offsite backup. Thanks for the suggestion though!
I think Protonmail might let you make a mail account without providing anything.
If my fingers prune I’m going to die or something