Per one tech forum this week: “Google has quietly installed an app on all Android devices called ‘Android System SafetyCore’. It claims to be a ‘security’ application, but whilst running in the background, it collects call logs, contacts, location, your microphone, and much more making this application ‘spyware’ and a HUGE privacy concern. It is strongly advised to uninstall this program if you can. To do this, navigate to 'Settings’ > 'Apps’, then delete the application.”

    • kattfisk@lemmy.dbzer0.com · 10 days ago

      To quote the most salient post:

      The app doesn’t provide client-side scanning used to report things to Google or anyone else. It provides on-device machine learning models usable by applications to classify content as being spam, scams, malware, etc. This allows apps to check content locally without sharing it with a service and mark it with warnings for users.

      Which is a sorely needed feature for tackling problems like SMS scams.
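
      A rough sketch of the flow this enables (toy code only - the real SafetyCore models and interface aren't public, so this keyword scorer is just a stand-in for the idea that classification happens locally and nothing leaves the device):

      ```kotlin
      // Toy stand-in for an on-device classifier: all "inference" is local and
      // nothing is uploaded. The real SafetyCore models are not public.
      data class Verdict(val label: String, val score: Double)

      fun classifyLocally(sms: String): Verdict {
          val scamSignals = listOf("gift card", "urgent", "verify your account", "crypto", "prize")
          val hits = scamSignals.count { sms.lowercase().contains(it) }
          val score = hits.toDouble() / scamSignals.size
          return if (score > 0.2) Verdict("likely-scam", score) else Verdict("ok", score)
      }

      fun main() {
          val v = classifyLocally("URGENT: verify your account to claim your prize")
          println("${v.label} (score=${v.score})") // likely-scam (score=0.6)
      }
      ```

      A messaging app could run something like this on each incoming text and show a warning banner, with no network round-trip involved.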

      • desktop_user@lemmy.blahaj.zone · 10 days ago

        If the cellular carriers were forced to verify that caller ID (or its SMS equivalent) was accurate, SMS scams would disappear, or at least be weakened. Google shouldn't have to do the carriers' job, and if it wanted to implement this anyway, it should let the user choose which service performs the task, similar to how it lets the user choose which "Android System WebView" implementation is used.

        • Aermis@lemmy.world · 10 days ago

          Carriers don't care; they are selling you data, and they don't care how it's used. Google is selling you a phone. Apple dominated the market for a long time by being the phone with some of the best security. As an Android user, that makes me want to switch phones, not carriers.

        • kattfisk@lemmy.dbzer0.com · 9 days ago

          No, that wouldn't make much difference. I don't think I've seen a real-world attack via SMS that even bothered to "forge" the from-field. People are used to getting texts from unknown numbers.

          And how would you possibly implement this supposed “caller-id” for a field that doesn’t even have to be set to a number?

          • desktop_user@lemmy.blahaj.zone · 8 days ago

            Caller ID is the thing that tells you the number. It isn't cheap to forge, but forging it is the only way a scam could reasonably affect anyone with more than half a brain. There is never a reason to send information to an unknown SMS number, or to click on a link in a text message from an unknown number.

      • throwback3090@lemmy.nz · 10 days ago

        Why do you need machine learning for detecting scams?

        Is someone in 2025 trying to help you out of the goodness of their heart? No. Move on.

        • Aermis@lemmy.world · 10 days ago

          If you want to talk money, then it is in a business's best interest that its users' money is spent on its products, not scammed away through the use of those products.

          Secondly, machine-learning algorithms can detect patterns in ways a human can't. In some circles I've read that the programmers themselves can't decipher from the code how the end result is produced, only that the inputs guide it. And while scammers can circumvent any carefully built anti-spam, anti-scam, or anti-virus protection written as traditional software, a learning algorithm will be magnitudes harder to bypass. Or easier. Depends on the algorithm.

          • throwback3090@lemmy.nz · 10 days ago

            I don't know the point of the first paragraph… scams are bad? Yes? Does anyone not agree? (I guess scammers do.)

            For the second: we are talking in the wild abstract, so I feel comfortable pointing out that every automated system humanity has come up with so far has absorbed our own biases, and since AI models are trained by us, this should be no different. Also, if the models are fallible, you cannot talk about success without talking about false positives. I don't care if it blocks every scammer out there if it also blocks a message from my doctor. Until we have data on how well these new algorithms agree with the desired outcomes, it's pointless to claim they are better at X.
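
            To put numbers on the false-positive point (all figures invented for illustration): even a filter that catches 99% of scams with only a 1% false-positive rate flags mostly legitimate messages when scams are rare.

            ```kotlin
            // Base-rate arithmetic with made-up numbers: when scams are rare,
            // a 1% false-positive rate buries the real hits under false alarms.
            fun main() {
                val messages = 100_000.0
                val scamRate = 0.001         // assume 0.1% of messages are scams
                val sensitivity = 0.99       // 99% of scams get flagged
                val falsePositiveRate = 0.01 // 1% of legit messages get flagged

                val scamsFlagged = messages * scamRate * sensitivity              // 99
                val legitFlagged = messages * (1 - scamRate) * falsePositiveRate  // 999
                val precision = scamsFlagged / (scamsFlagged + legitFlagged)

                println("flagged: ${scamsFlagged + legitFlagged}, actually scams: $scamsFlagged")
                println("chance a flagged message is a scam: ${"%.0f".format(precision * 100)}%") // ~9%
            }
            ```

            So "blocks a message from my doctor" is exactly the regime these systems operate in unless the false-positive rate is tiny.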

        • kattfisk@lemmy.dbzer0.com · 9 days ago

          Blaming the victim solves nothing.

          Scamming is a rapidly growing industry that is becoming more professional and specialized all the time. Anyone can be scammed.

      • cley_faye@lemmy.world · 10 days ago

        You don't need advanced scanning technology, running on every device with access to every single bit of data you've ever seen, to detect scams. You need telco operators to stop forwarding messages with forged headers and… that's it. Cheap, efficient, and zero risk of privacy invasion via a piece of software you did not need but that was put there "for your own good".

        • zlatko@programming.dev · 10 days ago

          I will perhaps be nitpicking, but… not exactly, not always. People get their accounts hacked all the time due to poor practices. And then those hacked accounts can send all the spam emails and texts they want without forged headers, so you still need spam filtering.

    • dan@upvote.au · 10 days ago

      So is this really just a local AI model? Or is it something bigger? My S25 Ultra has the app, but it hasn't used any battery or data.

      • Auli@lemmy.ca · 10 days ago

        I mean, the GrapheneOS devs say it is. Are they going to lie?

        • throwback3090@lemmy.nz · 10 days ago

          Yes, absolutely, and regularly, and without shame.

          But not usually about technical stuff.

    • throwback3090@lemmy.nz · 10 days ago

      The Graphene folks have a real love for the word misinformation (and FUD, and brigading). That's not you under there 👻, Daniel, is it?

      After 5 years of his antics (hateful bullshit and lies), I think I can genuinely say that word triggers me.

      • teohhanhui@lemmy.world · 10 days ago

        Please, read the links. They are the security and privacy experts when it comes to Android. That’s their explanation of what this Android System SafetyCore actually is.

      • loics2@lemm.ee · 10 days ago

        Have you even read the article you posted? It mentions these posts by GrapheneOS.

    • moncharleskey@lemmy.zip · 11 days ago

      I struggle with GitHub sometimes. It says to download the APK, but I don't see it in the file list. Anyone care to point me in the right direction?

    • Druid@lemmy.zip · 11 days ago

      Amazing, thank you. I have uninstalled this BS twice now and have so far been spared another forced install. I hope this works.

    • K4mpfie@feddit.org · 11 days ago

      And what exactly does the GitHub app do?

      I suppose it's not the same as the Google app?

      • ziggurat@lemmy.world · 11 days ago

        It doesn't do anything. The only reason to consider installing it is that it is cryptographically signed by another developer, so if Google tries to install SafetyCore again, the install will fail because Google's signature is different. It also has a super-high version number, so an attempt to install Google's (lower-versioned) build would be treated as a downgrade and refused.
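
        For the curious, you can check what is actually installed under that package name yourself. A sketch (assuming the package name com.google.android.safetycore, which is what the Play listing shows - treat that as an assumption):

        ```kotlin
        // Sketch: read the installed package's versionCode and signer. Android
        // refuses to update a package when the new APK's signature differs, and
        // won't replace a higher versionCode with a lower one -- which is
        // exactly what the stub app relies on.
        import android.content.Context
        import android.content.pm.PackageManager
        import android.util.Log

        fun inspectSafetyCore(context: Context) {
            val pkg = "com.google.android.safetycore" // assumed package name (per the Play listing)
            try {
                val info = context.packageManager.getPackageInfo(pkg, PackageManager.GET_SIGNING_CERTIFICATES)
                val signer = info.signingInfo?.apkContentsSigners?.firstOrNull()
                Log.d("SafetyCore", "versionCode=${info.longVersionCode} signer=${signer?.toCharsString()?.take(16)}")
            } catch (e: PackageManager.NameNotFoundException) {
                Log.d("SafetyCore", "not installed")
            }
        }
        ```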

  • AWittyUsername@lemmy.world · 10 days ago

    Google says that SafetyCore "provides on-device infrastructure for securely and privately performing classification to help users detect unwanted content".

    Cheers Google, but I'm a capable adult and able to do this myself.

  • mctoasterson@reddthat.com · 10 days ago

    People don’t seem to understand the risks presented by normalizing client-side scanning on closed source devices. Think about how image recognition works. It scans image content locally and matches to keywords or tags, describing the person, objects, emotions, and other characteristics. Even the rudimentary open-source model on an immich deployment on a Raspberry Pi can process thousands of images and make all the contents searchable with alarming speed and accuracy.

    So once similar image analysis is done on a phone locally, and pre-encryption, it is trivial for Apple or Google to use that for whatever purposes their use terms allow. Forget the iCloud encryption backdoor. The big tech players can already scan content on your device pre-encryption.
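
    If anyone doubts how low the bar is: here is roughly what local image classification looks like with Google's ML Kit image-labeling library (a public, documented API - not SafetyCore itself, whose internals aren't published). A few lines, runs offline:

    ```kotlin
    // Sketch using ML Kit's on-device image labeler (a public Google library,
    // not SafetyCore) to show how trivial local, pre-encryption analysis is.
    import android.graphics.Bitmap
    import android.util.Log
    import com.google.mlkit.vision.common.InputImage
    import com.google.mlkit.vision.label.ImageLabeling
    import com.google.mlkit.vision.label.defaults.ImageLabelerOptions

    fun labelLocally(bitmap: Bitmap) {
        val labeler = ImageLabeling.getClient(ImageLabelerOptions.DEFAULT_OPTIONS)
        labeler.process(InputImage.fromBitmap(bitmap, 0))
            .addOnSuccessListener { labels ->
                // Runs entirely on-device; the output is just strings and
                // confidences -- exactly the kind of tiny data that would be
                // cheap to backchannel later.
                labels.forEach { Log.d("Labels", "${it.text}: ${it.confidence}") }
            }
            .addOnFailureListener { e -> Log.e("Labels", "labeling failed", e) }
    }
    ```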

    And just because someone does a traffic analysis of the process itself (SafetyCore, or mediaanalysisd, or whatever) and shows it doesn't directly phone home, that doesn't mean it is safe. The entire OS is closed source, and it only needs to backchannel small amounts of data in order to fuck you over.

    Remember that the original justification for client-side scanning from Apple was "detecting CSAM". Well, they backed away from that line of thinking, but they kept all the client-side scanning in iOS and macOS. It would be trivial for them to flag many other types of content and furnish that data to governments or third parties.

    • ad_on_is@lemm.ee (OP) · 11 days ago

      If there were something that could run Android apps virtualized, I'd switch in a heartbeat.

      • Refurbished Refurbisher@lemmy.sdf.org · 11 days ago

        There are two solutions for that. One is Waydroid, which is basically what you’re describing. Another is android_translation_layer, which is closer to WINE in that it translates API calls to more native Linux ones, although that project is still in the alpha stages.

        You can try both on desktop Linux if you’d like. Just don’t expect to run apps that require passing SafetyNet, like many banking apps.

        • ad_on_is@lemm.ee (OP) · 10 days ago

          I know about Waydroid but had never heard of ATL.

          So yeah, while we have the fundamentals, we still don't have an OS that's stable enough as a daily driver on phones.

          And this isn't a Linux issue. It's mostly because of proprietary drivers. GrapheneOS already has the problem that it only works on Pixel phones.

          I can imagine that bringing a Linux-only mobile OS to life is even harder. I wish Android phones were designed with a driver layer and an OS layer, with standardized APIs, so you could simply swap the OS layer for any Unix-like system.

          • Refurbished Refurbisher@lemmy.sdf.org · 10 days ago

            Halium is basically what you’re talking about. It uses the Android HAL to run Linux.

            The thing is, that also uses the Android kernel, meaning there will essentially never be a kernel update, since the kernel patches by Qualcomm carry a ton of technical debt. The people working on porting mainline Linux to these SoCs are essentially rewriting everything from scratch.

      • bdonvr@thelemmy.club · 11 days ago

        Every one of them can, AFAIK. I have a second cheap used phone that I picked up to play with Ubuntu Touch, and it has a system called Waydroid for this. Not quite seamless, and you'll want to use native apps when possible, but it does work.

        SailfishOS, postmarketOS, Mobian, etc. can all also use Waydroid or something similar.

    • ilinamorato@lemmy.world · 11 days ago

      The Firefox Phone should’ve been a real contender. I just want a browser in my pocket that takes good pictures and plays podcasts.

      • StefanT@lemmy.world · 11 days ago

        Unfortunately, Mozilla is going down the enshittification route more and more. So perhaps it's good, in this case, that the Firefox Phone didn't take off.

      • Ledericas@lemm.ee · 10 days ago

        Too bad Firefox is going the same way as Google; they are updating their privacy terms of use.

        • ilinamorato@lemmy.world · 10 days ago

          Yep. I’m furious at Mozilla right now. But when the Firefox Phone was in development, they were one of the web’s heroes.

          • Ledericas@lemm.ee · 10 days ago

            It says it's only for LLMs? As long as they don't try to expand the "privacy" terms. In any case, I download alternative browsers anyway.

    • DegenerateSupreme@lemmy.zip · 10 days ago

      I just gave up and pre-ordered the Light Phone 3. Anytime I truly need a mobile app, I can just use an old iPhone and a WiFi connection.

  • DigitalDilemma@lemmy.ml · 10 days ago

    More information: it's been rolling out to Android 9+ users as a high-priority update since November 2024. Some users report that it installs even when the device is on battery and off Wi-Fi, unlike most apps.

    App description on the Play Store: "SafetyCore is a Google system service for Android 9+ devices. It provides the underlying technology for features like the upcoming Sensitive Content Warnings feature in Google Messages that helps users protect themselves when receiving potentially unwanted content. While SafetyCore started rolling out last year, the Sensitive Content Warnings feature in Google Messages is a separate, optional feature and will begin its gradual rollout in 2025. The processing for the Sensitive Content Warnings feature is done on-device and all of the images or specific results and warnings are private to the user."

    Google's description: "Sensitive Content Warnings is an optional feature that blurs images that may contain nudity before viewing, and then prompts with a 'speed bump' that contains help-finding resources and options, including to view the content. When the feature is enabled, and an image that may contain nudity is about to be sent or forwarded, it also provides a speed bump to remind users of the risks of sending nude imagery and preventing accidental shares." - https://9to5google.com/android-safetycore-app-what-is-it/

    So this looks like something that sends pictures from your messages (at least initially) to Google for an AI to check whether they're "sensitive". The app is 44 MB, so too small to contain a useful AI, and I don't think this could happen on-phone, so it must require sending your on-phone data to Google?

  • SavageCoconut@lemmy.world · 11 days ago

    Google says that SafetyCore “provides on-device infrastructure for securely and privately performing classification to help users detect unwanted content. Users control SafetyCore, and SafetyCore only classifies specific content when an app requests it through an optionally enabled feature.”

    GrapheneOS — an Android security developer — provides some comfort, that SafetyCore “doesn’t provide client-side scanning used to report things to Google or anyone else. It provides on-device machine learning models usable by applications to classify content as being spam, scams, malware, etc. This allows apps to check content locally without sharing it with a service and mark it with warnings for users.”

    But GrapheneOS also points out that “it’s unfortunate that it’s not open source and released as part of the Android Open Source Project and the models also aren’t open let alone open source… We’d have no problem with having local neural network features for users, but they’d have to be open source.” Which gets to transparency again.

    • FauxLiving@lemmy.world · 11 days ago

      Graphene could easily allow open-source solutions to emulate the SafetyCore interface, like how it handles Google's location services.

      There are plenty of open-source libraries and models for running local AI; this seems like something that could easily be replicated in the FOSS world.

    • hector@sh.itjust.works · 11 days ago

      Thanks for the link. This is impressive, because it really has all the traits of spyware; apparently it installs without asking for permission?

      • Moose@moose.best · 11 days ago

        Yup, heard about it a week or two ago. Found it installed on my Samsung phone; it never asked for permission or gave any indication that it had been added to my phone.

      • Ledericas@lemm.ee · 10 days ago

        Yeah, I found it as soon as this article said it was on your phone spying on you. Also, many people, like myself, noticed the battery draining pretty fast; this is probably the cause. And if it installs without your knowledge, I suspect the app is also excluded from your app battery-usage logs, so it doesn't show how much power it's using.

    • Raiderkev@lemmy.world · 11 days ago

      Thanks. Uninstalled. Not that it matters, they already got what they wanted from me most likely.

    • x4740N@lemm.ee · 9 days ago

      Apparently I'm a beta tester for it - I don't recall signing up to beta test it.

    • lka1988@lemmy.dbzer0.com · 11 days ago

      Thanks. Uninstalled and reported. Hopefully they’ll get the hint. I love my Android, but this is pushing me towards Graphene/Calyx.

  • perestroika@lemm.ee · 10 days ago

    The countdown to Android's slow and painful death has already been ticking for a while.

    It has become over-engineered and is no longer appealing from a developer's viewpoint.

    I still write code for Android because my customers need it - and will for a while - but I've stopped writing code for Apple's i-things, and I'm researching alternatives to Android. Rolling my own environment with FOSS components on top of Raspbian already looks feasible. On robots and automation, I already use it.

      • perestroika@lemm.ee · 10 days ago

        In my experience, the API has iteratively made it ever harder for applications to automatically perform previously easy jobs - jobs which are trivial under ordinary Linux (e.g. become an access point, set the SSID, set the IP address, set the PSK, start a VPN connection, go into monitor/inject mode, access a USB device, write files to a directory of your choice, install an APK). Now there's a literal thicket of API calls and declarations to get through before you can do some of these things (and some are forever gone).
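
        The access-point example shows the pattern well: under ordinary Linux you configure hostapd and you're done, while on modern Android an app only gets something like the following, where the system picks the SSID and passphrase for you (you can read them back, but not set them):

        ```kotlin
        // What "become an access point" has shrunk to since API 26: a
        // local-only hotspot whose SSID and PSK are assigned by the system,
        // not by the app.
        import android.net.wifi.WifiManager
        import android.util.Log

        fun startHotspot(wifi: WifiManager) {
            // Also requires CHANGE_WIFI_STATE plus a granted location permission.
            wifi.startLocalOnlyHotspot(object : WifiManager.LocalOnlyHotspotCallback() {
                override fun onStarted(reservation: WifiManager.LocalOnlyHotspotReservation) {
                    val config = reservation.wifiConfiguration // read-only; deprecated in API 30
                    Log.d("Hotspot", "system-assigned SSID: ${config?.SSID}")
                }
                override fun onFailed(reason: Int) {
                    Log.d("Hotspot", "refused, reason=$reason")
                }
            }, null)
        }
        ```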

        The obvious reason is that Google tries to protect a billion inexperienced people from scammers and malware.

        But it kills the ability to do non-standard things, and the concept of your device being your own.

        And a big problem is that so many apps rely on advertising for their income. Spying a little has been legitimized and turned into a business on Android. To maintain control, the operating system then has to be restrictive toward apps, which pisses off developers who have a trusting relationship with their customers and want their apps to have the freedom to operate.

        • throwback3090@lemmy.nz · 10 days ago

          I suppose that's all true. I'd call it more "following Apple's lead on locking things down" than over-engineered, but 🍅🍅.

          I find myself avoiding the whole root business; I do want my mobile device to be fairly locked down. But I also use alternative OSes and app stores to avoid 90% of the garbage (stuff I can't avoid goes in the work profile - I still need Google Maps, for example).

          It works for me, but as for this complexity driving away devs, I don't really see a viable alternative. Base Linux isn't secure enough for what we put on these little computers. I mean, you've still got tons of influential people arguing you shouldn't use Secure Boot or a TPM, as if leaving your whole computer unsecured were better than the indignity of using a non-free BIOS.

    • danciestlobster@lemm.ee · 10 days ago

      I also reported it as hostile and inappropriate. I'm sure Google will do fuck-all with that report, but I enjoy being petty sometimes.

  • variouslegumes@reddthat.com · 11 days ago

    I switched over to GrapheneOS a couple of months ago and couldn't be happier. If you have a Pixel, the switch is really easy. The biggest obstacle was exporting my contacts from my Google account.

    • Kbobabob@lemmy.world · 10 days ago

      GrapheneOS — an Android security developer — provides some comfort, that SafetyCore “doesn’t provide client-side scanning used to report things to Google or anyone else. It provides on-device machine learning models usable by applications to classify content as being spam, scams, malware, etc. This allows apps to check content locally without sharing it with a service and mark it with warnings for users.”

  • Armand1@lemmy.world · 11 days ago

    For people who have not read the article:

    Forbes states that there is no indication that this app can or will “phone home”.

    Its stated use is for other apps to scan images they have access to, to find out what kind of thing they show (known as "classification"). For example, to find out whether the picture you've been sent is a dick pic, so the app can blur it.

    My understanding is that, if this is implemented correctly (a big ‘if’) this can be completely safe.

    Apps requesting classification could be limited to classifying only files they already have access to. Remember that Android nowadays has a concept of "scoped storage" that lets you restrict folder access. If that's the case, then it's no less safe than not having SafetyCore at all. It just saves you space, as companies like Signal, WhatsApp, etc. no longer need to train and ship their own machine-learning models inside their apps; it becomes a common library/API any app can use.
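
    For concreteness, the "files they already have access to" part is how scoped storage already works: the app only ever receives a grant for documents the user explicitly picks. The picker below is the real API; limiting a SafetyCore-style classifier to those URIs is the hypothetical part:

    ```kotlin
    // Sketch: under scoped storage an app can't wander the filesystem. It
    // fires the system document picker and receives a URI grant for exactly
    // one user-chosen file.
    import android.app.Activity
    import android.content.Intent

    const val PICK_IMAGE = 42 // arbitrary request code

    fun requestOneImage(activity: Activity) {
        val intent = Intent(Intent.ACTION_OPEN_DOCUMENT).apply {
            addCategory(Intent.CATEGORY_OPENABLE)
            type = "image/*"
        }
        // The picker runs in a separate process; the app gets access only to
        // the document the user chose (delivered in onActivityResult), no more.
        activity.startActivityForResult(intent, PICK_IMAGE)
    }
    ```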

    It could, of course, if implemented incorrectly, allow apps to snoop without asking for file access. I don’t know enough to say.

    Besides, do you think Google isn't already scanning for things like CSAM? It's been confirmed that this is done on platforms like Google Photos, well before SafetyCore was introduced, though I've not seen anything about it being done on devices yet (correct me if I'm wrong).

    • Ulrich@feddit.org · 11 days ago

      Forbes states that there is no indication that this app can or will “phone home”.

      That doesn’t mean that it doesn’t. If it were open source, we could verify it. As is, it should not be trusted.

    • ZILtoid1991@lemmy.world · 11 days ago

      The issue is, a certain cult (Christian dominionists), with the help of many billionaires (including Muskrat), has installed a fucking dictator in the USA, and they are carrying out their vow to "save every soul on Earth from hell". If porn gets banned, it'll phone not just home but straight to the FBI's new "moral police" unit.

    • lepinkainen@lemmy.world · 10 days ago

      This is EXACTLY what Apple tried to do with their on-device CSAM detection. It had a ridiculous number of safeties to protect people's privacy, and still it got shouted down.

      I'm interested to see what happens when Holy Google, for which most nerds have a blind spot, does the exact same thing.

      EDIT: judging by the downvotes, it really seems that Google can do no wrong 😆 and Apple is always the bad guy on Lemmy.

      • lka1988@lemmy.dbzer0.com · 11 days ago

        I have 5 kids. I’m almost certain my photo library of 15 years has a few completely innocent pictures where a naked infant/toddler might be present. I do not have the time to search 10,000+ pics for material that could be taken completely out of context and reported to authorities without my knowledge. Plus, I have quite a few “intimate” photos of my wife in there as well.

        I refuse to consent to a corporation searching through my device on the basis of “well just in case”, as the ramifications of false positives can absolutely destroy someone’s life. The unfortunate truth is that “for your security” is a farce, and people who are actually stupid enough to intentionally create that kind of material are gonna find ways to do it regardless of what the law says.

        Scanning everyone’s devices is a gross overreach and, given the way I’ve seen Google and other large corporations handle reports of actually-offensive material (i.e. they do fuck-all), I have serious doubts over the effectiveness of this program.

      • Noxy@pawb.social · 11 days ago

        it had a ridiculous amount of safeties to protect people’s privacy

        The hell it did, that shit was gonna snitch on its users to law enforcement.

        • lepinkainen@lemmy.world · 10 days ago

          Nope.

          A human checker would get a reduced-quality copy only after multiple CSAM matches, and no police would be called unless the human checker verified a positive match.

          Your idea of flooding someone with fake matches that are actually cat pics wouldn’t have worked

      • Natanael@infosec.pub · 11 days ago

        Apple had it report suspected matches, rather than warning locally

        It got canceled because the fuzzy-hashing algorithms turned out to be so insecure that it's unfixable (it's easy to plant false positives).
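
        For anyone wondering why fuzzy hashes are attackable when cryptographic ones aren't: a perceptual hash deliberately maps similar-looking images to nearby hashes, so an attacker can nudge pixels until an innocent-looking image lands on a target hash. A toy "average hash" shows the principle (NeuralHash was a neural-network variant - much fancier, same failure mode):

        ```kotlin
        // Toy 64-bit average hash: shrink to 8x8 grayscale, set one bit per
        // pixel brighter than the mean. Small pixel nudges barely move the
        // hash, which is what makes deliberate collisions feasible.
        // Illustrative only; NeuralHash used a neural net, not this.
        import java.awt.Image
        import java.awt.image.BufferedImage

        fun averageHash(src: BufferedImage): ULong {
            val small = BufferedImage(8, 8, BufferedImage.TYPE_INT_RGB)
            small.graphics.drawImage(src.getScaledInstance(8, 8, Image.SCALE_SMOOTH), 0, 0, null)
            val gray = IntArray(64) { i ->
                val rgb = small.getRGB(i % 8, i / 8)
                ((rgb shr 16 and 0xFF) + (rgb shr 8 and 0xFF) + (rgb and 0xFF)) / 3
            }
            val mean = gray.average()
            var hash = 0uL
            gray.forEachIndexed { i, v -> if (v > mean) hash = hash or (1uL shl i) }
            return hash
        }
        ```

        Two images whose 8x8 averages match bit-for-bit hash identically, no matter how different they look at full resolution.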

        • lepinkainen@lemmy.world · 10 days ago

          They were not "suspected" - they had to be matches to actual CSAM.

          And after that, a reduced-quality copy was shown to an actual human, not an AI as in Google's case.

          So a false positive would slightly inconvenience a human checker for 15 seconds, not get you swatted or your account closed.

          • Natanael@infosec.pub · 10 days ago

            Yeah, so here's the next problem: downscaling attacks exist against those algorithms too.

            https://scaling-attacks.net/

            Also, even if those attacks were prevented, they're still going to look through basically your whole album if you trigger the alert.

            • lepinkainen@lemmy.world · 10 days ago

              And you’ll again inconvenience a human slightly as they look at a pixelated copy of a picture of a cat or some noise.

              No cops are called, no accounts closed

              • Natanael@infosec.pub · 10 days ago

                The scaling attack specifically can make a photo sent to you look innocent to you and malicious to the reviewer; see the link above.

    • Opinionhaver@feddit.uk · 11 days ago

      Doing the scanning on-device doesn't mean the findings can't be reported onward. I don't want others going through my private stuff without asking - not even machine learning.