The thing with Wayland is that if you don’t use GNOME, KDE, or tiling compositors, your options are pretty limited; the closest thing to an independent stacking compositor would be Wayfire.
You can make things work, but with so many nuances here and there that you get tired of dealing with them: gtk4 not paying attention to GDK_DPI_SCALE; no file manager recognizing gvfs except for Nautilus, which, guess what, is already gtk4; Electron apps built on top of the latest Electron (22) and gtk3 not paying attention to GDK_DPI_SCALE either; wf-shell not offering a tray while Waybar does, but Waybar not offering a launcher button while wf-shell does; and so on…
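For reference, these are the scaling knobs GTK is supposed to honor; a minimal sketch, assuming a GTK3 app launched from a shell (the app name is a placeholder, and gtk4 and Electron 22 ignore GDK_DPI_SCALE in my experience, which is the complaint above):

```
# integer UI scaling for GTK apps
export GDK_SCALE=2
# fractional font-DPI correction, honored by gtk3 but not gtk4/Electron 22
export GDK_DPI_SCALE=0.5
# launch the app from the same environment (placeholder name)
some-gtk3-app &
```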
I actually spent about a week recently trying to make myself comfortable with Wayland, but for me it still lags behind Xorg.
But perhaps the author is right, and if Xorg had been dropped altogether, Wayland would be in better shape; that we’ll never know… On the other hand, who knows whether someone would have forked Xorg, or revived it after noticing too many users complaining. It’s been many years since Wayland’s introduction, and it’s still not a workable replacement for a good number of Xorg users (usually not considered in the bulk numbers, though). Every now and then I try it, whenever I’m reminded of this The Register post, but I haven’t gotten to the point of really wanting to migrate…
Hopefully things will change, but until then, I really hope Xorg keeps going.
I read somewhere that Loongson was going to switch ISA to RISC-V; I can’t remember when. But this article suggests they’ll stay on their MIPS-derived LoongArch, apparently (“his company plans to build out a software ecosystem that will allow Chinese users to run more applications on the LoongArch ISA natively”)?
I think they should move to RISC-V. I know the MIPS ISA was open-sourced as well, one or two years back, but still…
I have mixed feelings about this blog post…
While decentralization is good for many reasons, and distributed systems are even better than decentralized ones, fascism, Nazis finding a place to gather, and extremists getting a voice are not a decentralization problem or weakness; yet the post sort of takes that as a given, and sort of offers workarounds through quarantining and the like.
People will always find a place to gather and make themselves heard. Before there were centralized social media sites, and even before the internet, there were fascists and Nazis. And when the big corps started to ban people they considered unacceptable on their spaces, those people built and found other places. So whether centralized or not, people will find ways to gather and share their thinking, whether or not big corps, big media, or common sense find those thoughts acceptable.
The real problem with centralized services is giving too much control and power to a few players, who have their own agenda, their own interests, and their own criteria, without giving a damn about what’s true or not, what’s common sense or not, or what benefits the majority or not. Big media has always concentrated too much power, and now the big tech corps do the same, and their interests are not there to represent everybody’s interests. Today’s banning culture, today’s censoring culture, and today’s dislike of different opinions are not just a product of centralization, but they sure are empowered and accelerated by it, and they are as corrosive to society as fascism and extremism; that mono-thinking and mono-culture might actually be a form of the very things culturally regarded as bad. Not to mention how centralization affects privacy, and adds far more risk to data we would like to preserve, given single points of control, single points of failure, and single points of risk.
To finish, I hoped for much more from such a title, but oh well…
Well, RISC-V is a nice open ISA, not open source HW. Even when some companies post their Chisel or RTL code on GitHub, that doesn’t necessarily make it open source. There are vendors with proprietary RISC-V-based IP already.
On the other hand, really open HW is really hard to achieve. The fabs’ recipes for the different process technologies are like secret sauce, and verifying the produced HW against the Chisel/RTL code is really hard. It starts from the fact that the business ecosystem is mostly fabless (except for Intel and a few IDM companies around), and designing companies usually include proprietary IP from vendors (who knows what they add into their IP), so there’s no way to verify that code, and it’s impossible for every company to design everything (too much complexity and too much expertise across a diverse ecosystem of technologies). Then, once the RTL and gate-level sims are somehow verified, companies send the design to a 3rd party for the physical design, including place and route plus mask design; as long as there’s equivalence between the original design (which already incorporated proprietary IP) and the one sent back by the 3rd party, it goes to the fab (though the 3rd party could have introduced something, obscured so equivalence is not broken). And finally, even if the fab doesn’t introduce anything the original company is not aware of, the recipe for 14nm, 7nm, or 2nm, and everything related to how the fab achieves what it does, is not open either (there are of course theory, papers, and talks about the processes, but what the fab finally does has to be protected against competitors). All the original company can do is verify the functionality of the resulting silicon, from post-Si verification all the way down to product verification. But several key pieces were proprietary IP and libraries to start with, the thing goes through several hands until it lands at the fab, and what ended up in the final silicon is never fully known; all you can do is verify that under certain scenarios the thing does what it was intended to do, :)
So in the end, fully open HW is really hard to get. But an open ISA is better than the current status quo, and hopefully it might motivate more openness in the HW industry…
In general, I don’t like the idea of flatpak, snap, and AppImage packages. First, and this differs between them, but in the end they all suffer it one way or another, there are huge binary dependency blobs, whether bundled with the app itself or installed from the package provider. At some point I tried to install Liri from Flatpak, and it was a nightmare of things having to be installed, when I already had most of them natively built by the distro I used.
As opposed to Linus’ own opinions, I prefer SW to be built against the same system libraries and dependencies, rather than each app coming along with its own set of binary dependencies. Getting GNU/Linux to behave like MS-Win, where you can install whatever binary from whatever source, perhaps even duplicating a bunch of stuff you already have on your system, is crazy. One thing that does get solved more easily with contained apps is depending on the same things but at different versions, and that to me is not ideal either. To me, as done by distros, one should avoid as much as possible having different versions of the same SW, and if really needed, rename one to include the version as part of the name, or something, but mainly avoid having multiple versions of the same thing all over. Guix handles that more elegantly of course, but I haven’t had the time, nor the guts, to go for Guix yet (still on my list of pending stuff).
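A minimal sketch of what Guix’s take looks like, with illustrative package names and versions (each version lives under its own hashed store path, so they don’t clash):

```
# install a specific version alongside whatever else is around
guix install emacs@28.2
# or get a throwaway environment pinned to a version, without installing
guix shell python@3.9 -- python3 --version
# every version gets its own isolated prefix under the store
ls /gnu/store | grep emacs
```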
The other thing is that, although nowadays everything comes with a signature to check, distro-provided packages are built from source; besides minimizing the amount of stuff needed, one can always look at how the packages are built (on Arch and derivatives, through the PKGBUILDs and companion files), tweak them, and build oneself. For example, fluxbox’s current head of master, and for a while back, doesn’t play nice with lxqt; with the help of the fluxbox devs I found the culprit commit, reverted it, applied the same distro recipe with my own patch, and moved on. No matter how well signed, binary packages are not as flexible, besides the fact that several are just proprietary, and one might not even be aware, since the move is to become more MS-Win like, even with auto-updates and such…
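Roughly, that workflow looks like this; a hedged sketch, where the repo URL follows Arch’s current packaging GitLab layout and the patch name is made up for illustration:

```
# fetch the distro recipe
git clone https://gitlab.archlinux.org/archlinux/packaging/packages/fluxbox.git
cd fluxbox
# drop the revert of the culprit commit next to the PKGBUILD,
# then reference it in source=() and apply it in prepare()
cp ~/revert-culprit-commit.patch .
# build and install with the distro recipe plus the local patch
makepkg -si
```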
Building with minimal systems and ecosystems in mind, and with mostly free/libre, or at least open source, SW makes things much better for me. One can still end up with a bloated, huge system if wanted, but at least not with a bunch of unnecessary duplicates.
Ohh well, if only TOR exit nodes couldn’t be used to spy on users, if government agencies would refrain from hosting exit nodes (and other nodes), and if the network could prevent exit-node monopolization (at times a single actor has been found controlling a good share of the exit nodes) and hijacking.
I’m really glad people investigate how to protect privacy better, and TOR has contributed to that. But despite the efforts to sanitize the TOR network, as a user one really doesn’t know whether one is getting the opposite effect. Agencies with resources are also able to deanonymize TOR users… So it’s really hard nowadays, as a user, to trust any internet mechanism intended to protect one’s privacy. VPNs are no better, perhaps even worse, since there it’s about trusting a centralized service, which has it even easier to spy on users.
Privacy on the net is near impossible nowadays, :(
I use Silence, :) With it I can block SMS from phone numbers once I’ve received something from them, but I was looking for something that would have prevented the SMS from reaching me in the first place, hehe. Like what “Yet Another Call Blocker” is meant to do for phone calls. That said, “Yet Another Call Blocker” still lets through some calls I really would have preferred blocked, but it’s better than nothing I guess; it has blocked some calls…
DuckDuckGo? No thanks: first a search engine, and now a browser app based on Blink…
Better use Mull, or any other FF based browser…
$1500 for the laptop: Tom’s Hardware reference. Well, it’s been a while since I bought a laptop for personal use, but I wonder whether there are $500 laptops worth acquiring.
That said, please remember RISC-V doesn’t mean an open source CPU; it just means an open ISA. Actually, there are many vendors now offering RISC-V CPU IP and other blocks, such as Cadence, Siemens (formerly Mentor), and others, but those are not open source. And even if the CPU were open source, there are other components that might not be. And there’s the matter of required firmware binaries…
Finding something fully open source, both HW and SW, and without binary blobs, is sort of hard these days. Hopefully that’s not too far away…
If looking for RISC-V, though, Roma it is, since there’s nothing close to it yet, :)
I’ve been looking for a p2p alternative that would allow a simple workflow, so I had some hope when noticing Radicle. But it builds on top of the blockchain hype, I’m afraid. This Cryptopedia post shows things I really don’t like.
It’s true `git` itself is sort of distributed, but trying to develop a workflow on top of pure `git` is not as easy. Email-based ones have been worked on, but not everyone is comfortable with them.
A p2p system using OpenDHT would have been my preferred approach. Anyway, I thought Radicle could be it, but so far I don’t like what I’m reading, even less with whom they are partnering:
Radicle has already partnered with numerous projects that share its vision via its network-promoting Seeders Program (a Radicle fund), including: Aave, Uniswap, Synthetix, The Graph, Gitcoin, and the Web3 Foundation. The Radicle crypto roadmap includes plans to implement decentralized finance (DeFi) tools and offer support for non-fungible tokens (NFTs). With over a thousand Radicle coding projects completed, this RAD crypto platform has shown that it’s a viable P2P code collaboration platform, one that has the ability to integrate with blockchain-based protocols.
Perhaps I’m just too biased. But if there’s another p2p option, hopefully free/libre SW and non-blockchain, then I’d be pretty interested in it…
Well, it seems SourceHut will have a web-based workflow, or so it seems from this postmarketOS post:
We talked to Drew DeVault (the main developer of SourceHut) and he told us that having the whole review process in the web UI available is one of the top priorities for SourceHut
…
SourceHut is prioritising to implement an entirely web-based flow for contributors.
These things don’t happen in one day, so don’t hold your breath yet, but it seems it’s coming at some point…
From a technical standpoint you’re right, particularly because of the limitations of decoding CISC x86 instructions, whose complexity doesn’t allow as many concurrent decoders as RISC does, besides RISC’s many other advantages. But also, from a technical standpoint, take into account that current Intel x86 implementations are really RISC cores wrapped around to support CISC. Still, several limitations remain, particularly in decoding.
I believe Apple’s architecture and design decisions go beyond just RISC, like sharing the same memory among the different processors (CPUs, GPUs, and so on). That gives the M1, M2, and coming SoCs an edge… So it’s not just about RISC…
But my opinion was more about the current sanctions, technology bans, and all those sorts of artifacts used to restrict and constrain Chinese technology. SoCs are not as short-term a play as one might think, since they’re not a cheap investment. So to me, since several years back, they should have focused on RISC-V, to avoid such huge non-technical problems. They have all the resources necessary to pursue a different path than the more costly and limiting one. Of course changing takes time, but again, they have everything they need to do so. That’s why it was a surprise to see investment in x86-compatible CPUs. But hey, they know their business better than anyone else, :)
Why develop an x86-compatible arch? Wouldn’t it be better for China to focus on RISC-V? They even had Loongson, but it’s MIPS-based…
Not sure whether Intel will demand something, or whether it actually licensed something (not their business model). Short term this might help keep some x86 SW running, but mid and long term it doesn’t make much sense, does it?
Well, so far, DHTs don’t seem avoidable in p2p (distributed) mechanisms… At any rate, there’s GNUnet, which also depends on a DHT, but it’s not developed by “Ethereum enthusiasts”.
True, but here’s the thing that is somewhat concerning… GNU has attempted to bring seL4 and other interesting microkernels to the Hurd, with so few hands, and no one really getting interested. And now that Google tries it: oh, how innovative…
I like the move to microkernels and the focus on clean-by-design. However, as mentioned by others, it’s still Google… We’ll see. It’ll be interesting to see where this new effort leads. Will this eventually take over Google’s mobile OS, or ChromeOS, or will there be a PC kind of Google OS?
I believe there’s a lot of misunderstanding about what’s freeSW, what’s openSW, and what the Debian repos have been providing all along.
Debian has been providing a “non-free” repo for all the suites they keep on their repo servers (experimental, unstable, testing, stable) for as long as I can remember.
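For context, enabling it has always been a one-line opt-in; a sketch of a typical /etc/apt/sources.list entry (the suite name is illustrative, and “non-free-firmware” is a newer split alongside “non-free”):

```
deb http://deb.debian.org/debian bookworm main contrib non-free non-free-firmware
```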
And to me it’s important to distinguish what’s freeSW vs. what’s not, and I prefer to use freeSW, unless I’m forced to use something that’s not freeSW and there’s no way around it.
This is one of the things the openSW movements (remember, IBM, MS, Google, and several other corps are all part of, or contribute to, openSW foundations, but have never supported the idea of freeSW) have influenced and convinced most people of. Now the value of freeSW means almost nothing, and most are just happy with openSW. I can’t judge anyone, but I’ll just say this is really sad. And once again I see people treating those defending principles as 2nd-class citizens, :(
Calibre has a CLI as well, if the GUI is really offensive. On Artix, part of `pacman -Ql calibre`:
calibre /usr/bin/calibre
calibre /usr/bin/calibre-complete
calibre /usr/bin/calibre-customize
calibre /usr/bin/calibre-debug
calibre /usr/bin/calibre-parallel
calibre /usr/bin/calibre-server
calibre /usr/bin/calibre-smtp
calibre /usr/bin/calibredb
calibre /usr/bin/ebook-convert
calibre /usr/bin/ebook-device
calibre /usr/bin/ebook-edit
calibre /usr/bin/ebook-meta
calibre /usr/bin/ebook-polish
calibre /usr/bin/ebook-viewer
calibre /usr/bin/fetch-ebook-metadata
calibre /usr/bin/lrf2lrs
calibre /usr/bin/lrfviewer
calibre /usr/bin/lrs2lrf
calibre /usr/bin/markdown-calibre
calibre /usr/bin/web2disk
calibre /usr/lib/
calibre /usr/lib/calibre/
calibre /usr/lib/calibre/calibre/
...
Although `ebook-convert`’s man page describes it as converting from one ebook format to another, it can also convert to PDF. I don’t know if it’s possible to convert from PDF to epub, but if the calibre GUI does it, perhaps some of calibre’s CLIs can do it as well.
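A quick sketch, since `ebook-convert` infers formats from the file extensions (file names here are placeholders):

```
# epub -> pdf, format picked from the output extension
ebook-convert book.epub book.pdf
# the reverse direction is accepted too, though PDF is a messy source
# format, so results vary
ebook-convert book.pdf book.epub
```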
Pandoc can also convert from epub to PDF, though my experience with pandoc as a very basic user is that the results are not of the quality I’d expect; but again, that was without using special arguments, CSS, and the like, so perhaps advanced users can get the best out of pandoc…
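For comparison, a basic pandoc invocation (PDF output requires a LaTeX engine installed; the tuning flags are just examples of the “special arguments” mentioned):

```
# plain conversion with the default LaTeX engine
pandoc book.epub -o book.pdf
# results usually improve with explicit options
pandoc book.epub -o book.pdf --pdf-engine=xelatex -V geometry:margin=2cm
```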
Well, SourceHut can be self-hosted as well (ain’t it open source anyway?):
https://sr.ht/~sircmpwn/sourcehut https://man.sr.ht/installation.md
That said, SourceHut has privacy-oriented and libre-oriented features GitLab doesn’t. But I understand that, as of now, without a webUI, it’s pretty hard to adopt SourceHut; and even when the webUI finally lands, having already invested in GitLab (and for the majority, in GitHub), which implies time and resources, it might not be easy to try SourceHut anyway.
The central webUI would be key for major players’ adoption, plus more time as well. It wasn’t long ago that Debian, Xorg, and Arch (still in progress) migrated to GitLab, for example. Those migrations are expensive in people and time.
And for adoption by regular individuals, besides enabling the webUI, it might be way harder, unless someone contributes resources to sr.ht to allow hosting projects, with no CI support, for free. It’s hard to get individual adoption at some cost, even a really low one, when there are alternatives, which BTW violate SW licenses, for free, :(
Better? :)
See, it all depends. As @Jeffrey@lemmy.ml mentioned, out of the box you can easily start mounting remote stuff in a secure way. Depending on the latency between the remote location and you, SSHFS might be more resilient than NFS, though in general it might be slower (data goes encrypted and encapsulated by default); but within the same LAN (not as remote as mounting something from Texas into Panamá, for example), I’m more than OK with SSHFS. CIFS/smbfs is something I prefer to avoid unless there’s no option: you need a Samba server exposing a “shared” area, it requires MS-NT-style configuration to work, and managing access control and users is, well, NTish. To me it’s way simpler to access a remote FS through the SSH access I already have to the remote device, so it boils down to NFS vs. SSHFS, and I consider the SSHFS way easier, faster, and more secure.
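A minimal sketch of that out-of-the-box experience (host, user, and paths are placeholders):

```
# mount a remote directory over the SSH access you already have
sshfs user@remotehost:/srv/data /mnt/remote
# options that help over flakier or higher-latency links
sshfs user@remotehost:/srv/data /mnt/remote \
      -o reconnect,ServerAliveInterval=15,compression=yes
# unmount when done
fusermount -u /mnt/remote
```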
But “better”, apart from being somewhat subjective, depends on your taste as well.
Also:
Tracking One Year of Malicious Tor Exit Relay Activities (Part II)
I’m wondering if it’s still that bad nowadays.
FYI: KMail does support Office365 + Exchange; the thing about the Kontact suite is its Akonadi DB dependency and all the KDE deps it requires. It’s like anything KDE you install: it brings a bunch of other stuff, usually things you never end up using…
However, I do like how KMail integrates with the local GnuPG, rather than Thunderbird’s librnp, which I end up replacing with Sequoia’s Octopus librnp…
I misread the article’s title, and indeed I didn’t see more signs of a privacy discussion within, though this conclusion:
DRM’s purpose is to give content providers control over software and hardware providers, and it is satisfying that purpose well.
is precisely one of the things I dislike about DRM… At any rate, my bad on the title…
We don’t have to agree with his criteria, do we? Starting from the fact that most DRM implementations are not open source. Besides, in order to control what you use, DRM by implication gets to see what you get, when you get it, where you use it, and so on. That’s by definition a privacy issue: they can gather stats on what you consume, how often you use it, where, on which devices, and so on.
But the main issue with DRM, I’d agree, is not privacy itself; it’s an ethical one. DRM has never prevented piracy. Its main issue is controlling and limiting the use of what you acquire/buy: disallowing sharing, sometimes even with yourself; disallowing unauthorized devices; or disallowing access to content you should have access to whenever you lack an internet connection to the corp watching and controlling how you use that content, or whatever it is that’s protected under DRM.
Of course, the blog comes from someone working at a big corp. At any rate, I guess not all open source supporters actually agree with the FSF that DRM is unethical. It so happens I do…
https://www.fsf.org/campaigns/drm.html https://www.defectivebydesign.org https://www.defectivebydesign.org/what_is_drm https://www.fsf.org/bulletin/2016/spring/we-need-to-fight-for-strong-encryption-and-stop-drm-in-web-standards
Ohh, there’s a tweet; however, I’ll have to see whether it’ll allow using OpenKeychain instead of TB’s own librnp, which I really dislike on the desktop, where I use Sequoia’s Octopus librnp (on top of GnuPG) instead.
I really don’t like TB’s way of keeping and maintaining keys (I use GnuPG’s “external” key feature for my private key, but TB’s librnp still wants it stored in its own DB for no reason, otherwise it can’t do a thing). And what applies to FF applies to TB: they shouldn’t attempt to keep passwords and keys themselves; better to use GnuPG, and for passwords something like QtPass on the desktop, while on Android there’s OpenKeychain and others… They have watched how it’s possible to do something like what the Sequoia team does, but I guess they like what they chose to do, :( Using Sequoia’s Octopus librnp on mobile might be rather complicated; it’s already somewhat tedious on distros not officially supporting it, since TB’s changes lately tend to break Octopus, and besides, one needs to replace the library again on every TB upgrade…
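The replacement chore itself is basically this; a hedged sketch, where the install paths are what an Arch-like distro might use and the path to the Octopus build is an assumption:

```
# back up TB's bundled rnp and drop the Octopus build in its place
cd /usr/lib/thunderbird
cp librnp.so librnp.so.orig          # keep the original around
# path to the built sequoia-octopus-librnp is an assumption, adjust to yours
cp /usr/lib/libsequoia_octopus_librnp.so librnp.so
# repeat after every TB upgrade that ships its own librnp again
```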
But for those using big-corp email providers, then yes, TB on Android is good news. In general it’s good to have TB on mobile as well; I just wish they’d give users more options. Extensions for GnuPG are all banned (admittedly, Enigmail was mangling too much of TB’s code), and they don’t like Autocrypt either, so no options…
I prefer K-9, but that’s a matter of taste. Apart from the Gmail affair, well, I really never saw much difference (agreed, FairEmail is more “standard” in the way it treats folders, but once you get used to K-9, you see the benefits of its own ways).
On the Gmail affair, well, the route FairEmail chose for Gmail’s OAuth2 authentication (K-9 doesn’t do it) is through having a Google account on your phone; so even if there’s a benefit over, say, the Gmail app, it’s terrible, even if you use LOS4microG or similar. I no longer have a Google account, since about 3 years ago, and I recommend de-googling, but I understand it’s hard for many, particularly those using Google accounts for work, :(
Don’t worry, I checked on BiglyBT before. It does the dual function: it hooks into i2p trackers, which are special, and can also hook into clearnet trackers, and whatever is being downloaded can be shared and exposed on both. It’s a specialized i2p torrent client, like Vuze.
That’s what I was trying to avoid using, :( I’m looking to see whether I could use any torrent client and just tunnel its traffic into the i2p router, as if it were a VPN or SSH tunnel. But so far, it seems you need a specialized torrent client, one that can at minimum connect to i2p trackers and use the various i2p file-sharing protocols…
If I’m mistaken, let me know, but it seems that’s the only way, at least from what I’ve read. Oh well, I don’t trust VPNs, and I don’t like the idea of using something I don’t trust, unless forced to…
Thanks a lot!
Ohh, so I can use any torrent client (rtorrent, for example) as long as I only use i2p-style trackers, or so I understand from your post and also from the wiki, perhaps specifying the binding address and port, or something like that…
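Something along these lines is what I have in mind; a hypothetical sketch, assuming the i2p router exposes a local HTTP tunnel for the trackers (the option names are real rtorrent ones, the address is illustrative):

```
# append to ~/.rtorrent.rc
cat >> ~/.rtorrent.rc <<'EOF'
network.bind_address.set = 127.0.0.1   # never touch the clearnet directly
dht.mode.set = disable                 # clearnet DHT would leak
protocol.pex.set = no                  # so would peer exchange
trackers.use_udp.set = no              # i2p trackers are reached over http
EOF
```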
Sorry if this is way too OT, :( Which i2p torrent client are you using? I don’t like the idea of Vuze with a plugin, nor BiglyBT. I’m more inclined to something like rtorrent (ncurses, and if used within a detached screen, then from any SSH session you can monitor it remotely, without needing additional remote access or web publishing)…
Ohh, that’s similar to using LibRedirect, which keeps a curated list of frontends.
The missing piece is keeping local copies of subscriptions wherever supported, so that if the user forgets to keep backups, that additional layer does it for them, and one can switch to any frontend instance while keeping the same subscriptions, and even preferences, :) That’s perhaps way beyond the scope of any tool…