• a4ng3l@lemmy.world · 2 days ago

    It’s never been illegal at all; you’re oversimplifying the issue. There are plenty of use cases that can legitimately use US clouds. Not all data is PII, and plenty of use cases perform fine after anonymising their data. Also, EU countries aren’t that much better than the US when it comes to state-issued privacy violations; we just don’t do dragnet bullshit (yet), but plenty of requests are served as asked…

      • notabot@lemm.ee · 2 days ago

        That’s not the only way to do it. In quite a lot of situations you can instead generate artificial data that is statistically similar to the original data set and use that. That works well for things like system testing, performance tuning and integration testing. Done right, you can even still pull out useful correlations without risking deanonymising the data.
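        A minimal sketch of what "statistically similar" can mean in practice: fit the original columns' means and covariance and sample fresh rows from that fit. The dataset, column meanings and all numbers below are invented for illustration; real pipelines use richer generators (copulas, CTGAN, etc.).

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a real sensitive dataset: rows are people,
# columns are (hypothetically) age and income.
original = rng.multivariate_normal(
    mean=[40.0, 55000.0],
    cov=[[100.0, 12000.0], [12000.0, 4.0e7]],
    size=1000,
)

# Fit the empirical mean and covariance, then sample synthetic rows.
mu = original.mean(axis=0)
sigma = np.cov(original, rowvar=False)
synthetic = rng.multivariate_normal(mu, sigma, size=1000)

# No real individual appears in the synthetic set, but correlations
# (here age vs. income) survive for testing and tuning purposes.
corr_orig = np.corrcoef(original, rowvar=False)[0, 1]
corr_synth = np.corrcoef(synthetic, rowvar=False)[0, 1]
print(corr_orig, corr_synth)
```

        The two printed correlations land close to each other, which is what makes the synthetic set usable for integration and performance tests.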

      • a4ng3l@lemmy.world · 2 days ago

        There are plenty of techniques to avoid re-identification… aggregation isn’t the only way. Especially considering that aggregating along a poorly chosen dimension doesn’t help at all…
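        One such alternative is generalisation in the k-anonymity style: coarsen quasi-identifiers until every combination is shared by at least k people, which is exactly where dimension choice matters. The records, fields and k value below are invented for illustration only.

```python
from collections import Counter

K = 2  # minimum group size; real deployments pick a larger k

# Hypothetical records: (age, postcode, diagnosis)
records = [
    (34, "1012AB", "flu"),
    (36, "1013CD", "flu"),
    (51, "2511EF", "asthma"),
    (58, "2513GH", "asthma"),
]

def generalise(age, postcode):
    # Coarsen the quasi-identifiers: 10-year age bands,
    # postcode truncated to its 2-digit region prefix.
    band = age // 10 * 10
    return (f"{band}-{band + 9}", postcode[:2])

coarse = [(*generalise(a, p), d) for a, p, d in records]
groups = Counter(row[:2] for row in coarse)

# Every (age band, postcode prefix) group holds >= K people;
# a group of 1 would leave a record re-identifiable.
print(groups)
```

        Picking a useless dimension to coarsen (say, truncating the diagnosis instead of the postcode) would leave each person in a group of one, which is the "stupid dimension" failure mode above.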

    • Zorsith@lemmy.blahaj.zone · 2 days ago

      An alarming amount of data that should be classed as PII isn’t. Information in aggregate can change classification, and PII should be treated the same way.

      • a4ng3l@lemmy.world · edited · 2 days ago

        Depends on the dimension used. « Shoulds » are meaningless. Let’s not assume everyone is doing shit work; awareness is getting there, and people are getting better at correctly classifying data. Anyway, assuming correct classification, there are techniques that change the classification enough to allow exporting data to shit countries.