Looks like
../images/loading.gif
../ means the parent directory, so if the CSS lives in public/css you want public/images.
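For example, assuming the stylesheet lives at public/css/style.css (the selector name here is just made up):

```css
/* In public/css/style.css — the URL is resolved relative to this file,
   so ../images/ points at public/images/ */
.spinner {
  background-image: url("../images/loading.gif");
}
```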
Ah, I didn’t think of the image references! Yeah, you’re probably better off downloading the whole library when it requires the other assets too. The CSS likely references them with paths relative to the CSS file itself, so the assets need to be stored in the same layout relative to that file.
How would things break from including it yourself? Just download the files from the links in your post and serve them from your webserver; it shouldn’t require any code changes beyond that. You also never know if they’ll take down an old version or have an outage, so I’d personally rather host them than rely on additional servers being up.
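As a rough sketch with hypothetical filenames (assuming the library ships as one CSS and one JS file):

```html
<!-- Before: loaded from the CDN -->
<!-- <link rel="stylesheet" href="https://cdn.example.com/somelib/1.2.3/somelib.min.css"> -->
<!-- <script src="https://cdn.example.com/somelib/1.2.3/somelib.min.js"></script> -->

<!-- After: the same files downloaded into your webserver's public directory -->
<link rel="stylesheet" href="/css/somelib.min.css">
<script src="/js/somelib.min.js"></script>
```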
If you are still using the setup in the post along with what I suggested, that’d probably be why. You wouldn’t need a tunnel container anymore, host networking, or the DNS settings. Just a web service that you want to expose. Is the host able to resolve the same domains properly?
It’s insider trading when they buy at the lowest point and sell when it’s back up because they knew of one or both of these events. Everyone knew it would crash. Not everyone knew he’d lift tariffs at a specific time when the market would “recover”.
Not trying to sidestep your issue, but why not use something that makes the tunnel an easy-to-create ingress the Kubernetes way? I don’t use CF tunnels so I haven’t used this, but it seems to be a proper solution.
https://github.com/STRRL/cloudflare-tunnel-ingress-controller
Edit: an operator linked in that GitHub project could be useful too if you want to support UDP and such: https://github.com/adyanth/cloudflare-operator
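For reference, a basic Ingress using that controller might look roughly like this; the ingressClassName is my assumption from the project’s README, so double-check its docs:

```yaml
# Hypothetical example: exposes an existing Service through the tunnel ingress controller.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  ingressClassName: cloudflare-tunnel   # assumed class name; verify against the controller docs
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```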
I would do a chargeback with my bank / credit card. He didn’t get the goods or service he bought, and it was because of the company, so I’d say there is a chance. It would also hurt MSG a tiny bit.
I didn’t think of using read-only replicas; that would probably be a very good way to go since probably 80%+ of actions are reads. Thanks for answering, I am excited to see how Lemmy grows, and thanks for all the devs’ hard work!
I 100% agree with this, and there have been great strides since I started using Lemmy around v0.17! That said, at some point optimization will have lower returns for higher effort, and once a community grows extensively it likely won’t be enough, so I was curious what you were thinking for that point, something like Citus for sharding Postgres?
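As a rough sketch of what that would look like with Citus (table and column names here are hypothetical, not Lemmy’s actual schema):

```sql
-- Enable the Citus extension, then spread a large table across worker nodes,
-- sharded by a column that most queries filter on.
CREATE EXTENSION citus;
SELECT create_distributed_table('comment', 'post_id');
```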
It is a k8s cluster using Ceph for all of my storage, so I bet the latency from that is the largest reason, and upping the memory offsets the disk writes. I also have another Postgres DB syncing as a fallback for high availability. Fortunately, after tuning the database and giving it enough RAM, my instance has been running pretty stable for over a year without any changes.
I am also using less powerful computers for the entire infrastructure (not server grade), which brings me to the point that horizontal scaling of the database will, I imagine, be a growing need as instances, communities, and users grow, since it can be cheaper to run multiple smaller-spec servers rather than a single big one, with the added benefit of high availability.
Yeah, I used pgtune as a base and found more memory needed to be assigned in certain spots, especially to keep federation working with bigger instances; otherwise timeouts would occur and my instance would constantly fall behind.
That said, I read Postgres 17 is much more memory efficient, though I have yet to move my Lemmy database to it since it’s the largest, haha.
On the server side, I have a question: what are your thoughts on horizontal scaling of the database? This seems to be the biggest limitation, requiring higher-spec hardware to scale, especially for the bigger instances.
My tiny instance, for example, gets over 20GB of RAM just for Postgres to make it perform efficiently enough.
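For reference, the kind of knobs I mean are the usual pgtune-style ones in postgresql.conf; the values below are only illustrative for a box where Postgres gets ~20GB, not a recommendation:

```
# Illustrative pgtune-style starting point for ~20GB dedicated to Postgres;
# actual values depend on workload and hardware.
shared_buffers = 5GB
effective_cache_size = 15GB
work_mem = 32MB
maintenance_work_mem = 1GB
```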
There is an issue open requesting this… I’ve been following it for a while.
The Steam Deck already limits charging to 80% after being plugged in for an extended period of time, so the battery cells will still have a charge but not be in the harmful range.
Like how we are speedrunning turning Earth into Mars?
I imagine it’d make the business more expensive. Low-orbit satellites slowly fall into the atmosphere and are supposed to burn up after a couple of years. With lower orbits they’d fall sooner, so you’d have to launch more to sustain your system, which then produces more pollution and perpetuates the problem.
Edit: the article says more space junk and slower burn-up in the atmosphere are the effects, so that’s interesting. If it becomes a space junk graveyard, I imagine satellites will more frequently get damaged by the junk and become junk themselves?
You got 20 years of dailies backlogged! All the FOMO on the seasons!
Their desktop version of Office has more features than the cloud web versions, so this doesn’t sound that bad. It also might mean the O365 small business license could use a desktop client now instead of being web only.
I always thought it was a bad idea for AWS to make bucket names globally unique. Attach the AWS account ID so it’s always unique; you could name a bucket whatever you want, and this attack vector wouldn’t be possible (unless you are AWS, I guess).
Yeah, I see nothing about using a local version that would cause that… Hmm, next I would try mixing CDN references and local references for the JS/CSS to pinpoint which one is making it buggy.