• 2 Posts
  • 21 Comments
Joined 1 year ago
Cake day: May 8th, 2023

  • More likely, they dial more calls than they can handle, on the basis that a silent hang-up call costs them only the price of connecting the call, while their scammers’ wages cost more if too few people answer and there is no one for a scammer to speak to.

    It’s essentially putting the cost of uncertain numbers of people answering onto the victims rather than the scammer - selfish, but so is scamming people!

    Telemarketers do the same thing, although in many countries they at least have to fear their local regulators if they do it too much, while scammers are criminals who are going to break the law anyway - so I suspect most silent calls are probably scammers.


  • more is a legitimate program (it reads a file and writes it out one page at a time), if it is the real more. It is a memory hog in that (unlike the more advanced pager less) it reads the entire file into memory.

    I did an experiment to see if I could get the real more to show similar fds to you. I piped yes "" | head -n10000 >/tmp/test, then ran more < /tmp/test 2>/dev/null. Then I ran ls -l /proc/`pidof more`/fd.

    Results:

    lr-x------ 1 andrew andrew 64 Nov  5 14:56 0 -> /tmp/test
    lrwx------ 1 andrew andrew 64 Nov  5 14:56 1 -> /dev/pts/2
    l-wx------ 1 andrew andrew 64 Nov  5 14:56 2 -> /dev/null
    lrwx------ 1 andrew andrew 64 Nov  5 14:56 3 -> 'anon_inode:[signalfd]'
    

    I think this suggests your open files are probably consistent with the real more when errors are piped to /dev/null. Most likely, you were running something that called more to output something to you (or someone else logged in on a PTY) that had been written to /tmp/RG3tBlTNF8. Next time, you could find the parent of the more process, or look up what else is attached to the same PTS with the fuser command.
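    Tracing a mystery process back to whoever started it can be done along these lines (a sketch: a background `sleep` stands in for the unknown `more` process, and the `fuser` line is commented out because it needs a real interactive terminal):

```shell
# Sketch: given a suspicious PID, find its parent and what shares its terminal.
# A background `sleep` stands in for the mystery process here.
sleep 60 &
pid=$!

# Who started it? (in the real case you'd use: pid=$(pidof more))
ppid=$(ps -o ppid= -p "$pid" | tr -d ' ')
echo "parent: $(ps -o comm= -p "$ppid") (pid $ppid)"

# What else is attached to the same terminal (uncomment on a real PTY):
# fuser -v "$(tty)"

kill "$pid"
```

    Walking up the parent chain like this usually identifies the script or login session that spawned the pager.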



  • Data being public (and privacy in general) shouldn’t be ‘all or none’. The problem is people joining the dots between individual bits of data to build a profile, not necessarily the individual bits of data.

    If you go out in public, someone might see you and recognise you, and that isn’t considered a privacy violation by most people. They might even take a photo or video which captures you in the background, and that, in isolation, isn’t considered a problem either (no expectation of privacy in a public place). But if someone sets out to do similar things at a mass scale (e.g. by scraping, or networking cameras, or whatever) and pieces together a profile of all the places you go in public, then that is a terrible privacy violation.

    Now you could similarly say that people who want privacy should never leave home, and otherwise people are careless and get what they deserve if someone tracks their every move in public spaces. But that is not a sustainable option for the majority of the world’s population.

    So ultimately, the problem is the gathering and collating of publicly available personally identifiable information (including photos) in ways people would not expect and don’t consent to, not the existence of such photos in the first place.




  • Phones have a unique equipment identifier (IMEI) that they share with towers. Changing the SIM changes the subscriber ID (IMSI) but not the IMEI (manufacturers don’t make it easy to change the IMEI). So thieves (and anyone else) carrying the phone can still be tracked by the IMEI even if they swap SIMs, as long as the phone is left on.

    In practice, the bigger reason they don’t get caught every time if they have inadequate opsec practices is that in places where phone thefts are common, solving them is probably not a big priority for local police. Discarding the SIM probably doesn’t make much difference to whether they get caught.


  • Wait times are as high as 2 months (depending on how old the phone model is, etc…), and even as a regular Xiaomi customer, their support never seems to allow anyone to skip the wait - even if, for example, they broke their old phone and want to set up a new one like the old one (ask me how I know). During that period, MIUI is like a data collection honeypot, sucking up your PII and serving you ads.

    It might be ‘normal’ now to Xiaomi customers to wait to be able to unlock the phones that they have paid for and own (perhaps in the same sense someone in an abusive relationship might consider getting hit ‘normal’ because it has been happening for a while), but the idea that the company who sold you the phone gets some say on when you get the ‘privilege’ of running what you like on it, and make you jump through frustrating hoops to control your own device, is certainly not okay.

    If they just wanted to stop resellers adding non-Xiaomi-sanctioned malware / bloatware before selling phones on, making the bootloader clearly indicate that it is unlocked (as Google does, for example) would be enough. Or they could create a different brand for phones that are unlocked, using the same hardware except with a different logo, and let people choose whether they want unlocked or walled garden.

    However, they make money off selling targeted ads based on information they collect - so I’m sure that they probably don’t want to do any of those things if they don’t have to, because they might disrupt their surveillance capitalism.


  • Xiaomi phones used to be good for custom ROMs, but now they try to stop you unlocking the bootloader by making you wait an unreasonable amount of time after first registering the device with them before you can unlock. Many of the other vendors are even worse.

    So from that perspective, Pixel devices are not a terrible choice if you are going to flash a non-stock image.


  • I once worked for a small ISP that decided to enter the calling card business. I built them a voice prompt system on top of Asterisk that made received PSTN calls over PRI and made outbound VoIP calls, all metered to cards with a unique number and a balance, and a UI to activate them. The business got boxes of physical cards printed, with a plan to sell them to convenience stores.

    They hired a salesperson (AKA worst coworker) to sell the boxes of cards. This coworker then sold many boxes of activated cards to many small stores at an unauthorised discount (below the level of profitability), for cash rather than through the approved methods for retailers to buy them, and then apparently spent said cash at the casino. The business had to honour the cards (i.e. not deactivate them) at a big loss to avoid ruining its reputation, since the buyers apparently did not know the deal was dodgy. His tenure was, suffice it to say, not long, but in his short time there he managed to put the business under financial strain, and it eventually went into liquidation.


  • I think it is more a drip pricing scam to increase revenue for the airline, especially when it is for things that don’t have an incremental cost for the airline. Can’t compete with other airlines? No problem: advertise a lower price than your competitors, then dream up charges for things your competitor offers as included that almost every customer wants (and perhaps even try to create problems for customers, then charge to make them go away). Now you get customers in the door with the lower initial price, but almost all of them end up paying more than if they had just gone with the competitor.

    It is not beneficial to the customer because it reduces the efficiency of the market (and hence competition) by making it harder to quickly compare prices and get the best overall offer.

    Other industries do the same - insurers with exclusions, retailers trying to make warranties an optional extra (where regulations allow them to do it), ISPs trying to drip price extra charges.

    If a business has absolute upfront honesty about all extra charges, but they genuinely have a reason to charge extra for some customers doing things that cost them significantly more, then that is a different matter, and not necessarily bad for their customers. But the second they try to conceal part of the price and progressively reveal it, it really is a form of scam.


  • Lemmy instances are no different to any other website in this regard. To ‘take over’ an instance would be to take over hosting of a website - which would mean either re-pointing the DNS somewhere else (and getting a copy of the database), or taking over the hosting of it (e.g. if it is hosted with a cloud provider, or by physically taking possession of the hardware).

    Taking over the DNS in nearly any gTLD or ccTLD (short of some kind of compromise at least) generally requires one of the following: 1) a process initiated by the registrant, or 2) proving that you are the registrant, or 3) proving you owned the trademark in the domain before it was registered or 4) waiting until the domain expires, any grace period is up, and then being first to register the domain.

    If the admin is completely gone, and they are the individual owner of the instance, you could wait for (4) and try to drop-catch the domain. But domains are generally registered for a minimum of 1 year, and often up to 10 in advance, so it could be a long wait. And even then, you wouldn’t have a copy of the database. It is quite possible the actual hosting of the instance has not been pre-paid for anywhere near that long (or might fail in any number of ways, or fall out of date and get compromised, or need some kind of manual intervention following a problem), so it could go down and not come back up a long time before the domain name expires.

    If the admin has made some arrangements in advance for takeover of the instance in case they are unable to continue, the picture can be a lot better. For example, if they created a legal structure (a company, or a not-for-profit organisation) for the instance, and that entity is the registrant and the owner of the cloud resources etc…, then other members of the organisation could call a Special General Meeting (or whatever similar procedure the org’s rules set up in advance), appoint a new president, and the new president could prove their authority to any providers involved to get access to the organisation’s resources (e.g. the hosting server). Or the admin could set up a dead-man’s switch to automatically email credentials for all the resources if they don’t check in for a couple of weeks; or give a few trusted people the credentials (possibly protected with something like Shamir’s Secret Sharing, so that n of m trusted friends - e.g. any 7 of 10 - need to combine their shares to recover the credentials) to take over the instance. But any of those things would require that they planned for that eventuality in advance.
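    That ‘n of m’ arrangement can be sketched in a few lines of Python. This is a toy illustration of Shamir’s Secret Sharing over a prime field - for real credentials, use a vetted implementation rather than something hand-rolled like this:

```python
# Toy Shamir's Secret Sharing sketch (illustrative only - not audited crypto).
import random

PRIME = 2**127 - 1  # a Mersenne prime large enough for a short secret


def make_shares(secret: int, threshold: int, total: int):
    """Split `secret` into `total` shares; any `threshold` of them recover it."""
    # Random polynomial of degree threshold-1 with the secret as constant term.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]

    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME

    return [(x, f(x)) for x in range(1, total + 1)]


def recover(shares):
    """Lagrange interpolation at x=0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret


shares = make_shares(secret=123456789, threshold=7, total=10)
assert recover(shares[:7]) == 123456789   # any 7 of the 10 shares are enough
```

    Fewer than `threshold` shares reveal essentially nothing about the secret, which is why it beats simply emailing everyone the full credentials.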

    Otherwise, of course, all users of the instance could just choose another instance (possibly posting a last message with a public key to that instance, to establish the link to their new account).


  • The proposal doesn’t say what the interface between the browser and the OS / hardware is. They mention (but don’t elaborate on) modified browsers. Google’s track record includes:

    1. Creating SafetyNet software and the Play Integrity API that create ‘attestations’ that the device is running manufacturer supplied software. They can pass for now (at a lower ‘integrity level’) with software like LineageOS combined with software like Magisk (Magisk by itself used to be enough, but then Google hired the Magisk developer and soon after that was dropped) and Universal SafetyNet Fix, but those work by making the device pretend to be an earlier device that doesn’t have ARM TrustZone configured, and one day the net is going to close - so these actively take control away from users over what OS they can run on their phone if they want to use Google and third party services (Google Pay, many apps).
    2. Requiring that Android apps be signed, and creating a separate tier of ‘trusted’ Android apps needed to create a browser. For example, to implement WebAuthn with hardware support on Android (as Chrome does), you need to call com.google.android.gms.fido.fido2.Fido2PrivilegedApiClient, and Google doesn’t even provide a way to apply to get allowlisted. Mozilla and Google are, for example, allowed to build software that uses that API - but want to run your own modified browser and call that API on hardware you own? Good luck convincing Google to add you to the allowlist.
    3. Locking down extension APIs in Chrome to make it unsuitable for things they don’t like, like Adblocking, as in: https://www.xda-developers.com/google-chrome-manifest-v3-ad-blocker-extension-api/.

    So if Google can make it so you can’t run your own OS, and their OS won’t let you run your own browser (and BTW Microsoft and Apple are on a similar journey), and their browser won’t let you run an adblocker, where does that leave us?

    It creates a ratchet effect where Google, Apple, and Microsoft can compete with each other, and the Internet is usable from their browsers running unmodified systems sold by them or their favoured vendors, but any other option becomes impractical as a daily driver, and they can effectively stack things against there ever being a new operating system / distro to compete with them, by making their web properties unusable and promoting that as the standard. This is a massive distortion of the open web from where it is now.

    One fix would be a regulation that if hardware has private or secret keys embedded into it, the manufacturer must provide the end user with those keys; and that if it has unchangeable public keys embedded, and requires software to be signed with the corresponding private key to boot or to access some hardware, the manufacturer must provide that private key to end users. If that were the law in a few jurisdictions big enough that manufacturers won’t just ignore them, it would shut down this sort of scheme.


  • Yeah everyone using Cloudflare is definitely centralisation, but maybe a kind of centralisation that allows for easier switching to something else if Cloudflare gets too crazy.

    DDoS is a war of attrition - and the best way to win a war of attrition is to make it cost much more than $1 to make you spend $1, and to be able to outspend the attackers (e.g. the whole community bands together to support the victims against the attacker). I think the best response depends on who is attacking.

    Network level DDoS is likely using stolen bandwidth - but the person directing the attack is probably paying someone for the use of it (i.e. they didn’t compromise the equipment themselves, someone else builds botnets and rents them out). If you can identify what traffic is part of a DDoS, you can track down where it is coming from, and alert the owner of the network where it is coming from, which hurts the person providing the services to the attacker quite a lot. If I have a reputation of: if you attack me for someone else, I’ll cost you a significant part of your business that will take you months to build back up, then you are not going to offer that service cheaply, or even at all.

    Application level DDoS usually relies on amplification of cost - I do something relatively inexpensive (like send a packet opening a connection), and it makes you do something really expensive involving databases, disk IO etc…; a good mitigation is to redesign the API to flip that on its head, so you do something expensive, and I do something relatively cheaper for you. There is an open issue about using Hashcash to do just that at: https://github.com/LemmyNet/lemmy/issues/3204 - the downside is that it forces users (even on mobile devices) to use more compute / power for every request to Lemmy, but I think there is a balance that can be struck there where it isn’t too bad for users, but makes that type of attack infeasible.
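    The idea in that issue can be sketched as a generic hashcash-style proof of work (this is not Lemmy’s actual implementation, and the challenge string here is made up):

```python
# Hashcash-style proof of work sketch: the client burns ~2**bits hash
# operations; the server verifies with a single hash.
import hashlib
import itertools


def mint(challenge: str, bits: int) -> int:
    """Client side: find a nonce whose SHA-256 has `bits` leading zero bits."""
    target = 2 ** (256 - bits)
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce


def verify(challenge: str, nonce: int, bits: int) -> bool:
    """Server side: one cheap hash to check the client's expensive work."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < 2 ** (256 - bits)


# ~65k hashes on average for the client, one hash for the server.
nonce = mint("POST /api/v3/comment:abc123", bits=16)
assert verify("POST /api/v3/comment:abc123", nonce, bits=16)
```

    Tuning `bits` is exactly the balance mentioned above: high enough to make bulk request floods expensive, low enough that a phone can mint a token without noticeable lag.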


  • It comes down to how you define winning. Define L(X_i) as the ‘loss’ of warring party i at the end of the war - positive loss means party i is worse off at the end of the war, while negative loss means party i is better off. If you are playing a board game, the rules might say someone always wins: the party i with the lowest L(X_i). But in a real-life war, if party 1 started the war, their objective is presumably that L(X_1) < 0 - i.e. they started the war to profit, not just to lose less than the other parties. So in a real war, it is fair to say a party i loses if L(X_i) > 0, and wins if L(X_i) < 0. To say no one wins a war between parties P is therefore to say \forall i \in P: L(X_i) > 0.

    Now in the case of wide scale nuclear war, parties likely launch all their nukes at each other within minutes so they launch before their capability to launch is destroyed. All major cities in all parties will likely be destroyed, and contaminated with nuclear fallout that may take years to decay to safe levels. Particulate thrown up by explosions would likely block out the sun and spoil all agriculture on earth for years (nuclear winter). Most people on earth would die. Government and civilisation would be unlikely to be able to continue under such circumstances - people might at least fall back to tribal organisation for a while.

    So a wide-scale nuclear war would almost certainly leave every party with a positive loss - hence ‘no winners’.




  • Linode and Vultr are both cloud providers outside the big 3 (AWS, GCP, Azure) that are a fair bit less expensive and have a range of instance types you can spin up, plus custom block storage services - they have a few regions to pick from so you can often get one with a low ping to you.

    If you want cheaper than that, and are okay with small providers who might not always be as reliable, try something like Lowendbox and check the listings there.



  • Also, the default nginx reverse proxy configuration runs nginx on port 1236, not 80, inside the container, and doesn’t have any kind of TLS configuration.

    I think most people likely have another layer of proxy (e.g. on the host) in front of it, instead of directly exposing the container’s nginx from Docker Compose - that’s what I do - and that’s where I do TLS with a Let’s Encrypt certificate.
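    A host-level vhost for that arrangement might look like the following sketch - the server name, certificate paths, and the assumption that the container’s port 1236 is published on 127.0.0.1:1236 are all placeholders for your particular setup:

```nginx
# Hypothetical host-level nginx vhost in front of the Compose stack.
# Assumes the container's port 1236 is published as 127.0.0.1:1236 and
# certbot has already issued a certificate for example.com.
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:1236;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

    Terminating TLS on the host like this keeps certificate renewal out of the container entirely.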