Post by Lstream on Aug 19, 2021 4:53:38 GMT -8
I believe that Apple is well intentioned, but at the same time I have concerns similar to those expressed here. I don’t know how Apple is going to avoid pressure from all kinds of governments with bad intent. For example, does Apple really want to run the risk of the Chinese government pressuring them for other purposes by threatening their access to the Chinese market? This seems like such a basic flaw in the concept that Apple must have considered it, but you wouldn’t know it based upon what we have seen from them so far. Saying “you can’t make us” seems naive.
Dave
Member
"It's tough to make predictions, especially about the future." Yogi Berra
Posts: 4,091
Post by Dave on Aug 19, 2021 6:13:17 GMT -8
Lstream said: “I don’t know how Apple is going to avoid pressure from all kinds of governments with bad intent. [...] Saying ‘you can’t make us’ seems naive.”

Yes, once the proverbial genie gets out of the bottle it's impossible to get it back in. He has already voiced that Apple has the ability and the willingness to scan its customers' iCloud accounts for one item; if one item, then why not many? The genie may already be out.
4aapl
Moderator
Posts: 3,622
Post by 4aapl on Aug 19, 2021 6:34:29 GMT -8
Lstream said: “I don’t know how Apple is going to avoid pressure from all kinds of governments with bad intent. [...] Saying ‘you can’t make us’ seems naive.”

I thought the posting the other day went into more detail on this: the red-flagged images have to be approved by multiple governments. That still doesn't put things 100% on solid ground, since there are often sides or coalitions, but it does help prevent the lone-wolf type of government from hunting for a certain pic. Apple is trying to do the right thing, and it's easy to condemn most of this area. It may have the power to make a dent, or it might just push certain people to a different platform. I wonder about the edge cases, like a kid looking at or taking kid photos, with or without an adult technically owning the phone. If we're condemning someone for life as a child predator, I just want to see it happen in the cases where it should, and not the edge cases. It's a tough area, but Apple tries to push progress forward at times. It's just normally by dropping a CD drive, instead of on big issues like this. Their security and privacy improvements get them into problems like this, where in other ways they end up protecting certain people they would rather not. I trust Apple and its management to do the right thing, embodying the "Don't be evil" that another company claimed to follow. Sometimes that makes for tough decisions.

But having been through what should have been a much less controversial decision, lowering the speed of old iPhones with old batteries so they didn't just shut off, we see that a big part of the problem with these things is the need to be overly upfront about them. If Apple had given the user the choice after running the software update, or maybe put up a reminder message every 2-4 weeks, it would have sidestepped most of the issue. In this case that probably means giving plenty of time for most offenders to leave the platform, which, while good for the platform, doesn't help fix the underlying issue. At the same time, that runs the offenders off even if the feature doesn't end up being implemented.
Post by 4aapl on Aug 19, 2021 6:42:26 GMT -8
Dave said: “Yes, once the proverbial genie gets out of the bottle it's impossible to get it back in. [...] The genie may already be out.”

Going a different direction on this: Apple has the ability to distribute OS and application software updates. Many to most people have these set to run automatically, but Apple is the one distributing the updates, so even just letting something slip through into an update would be bad, whether it was auto-installed or the user started the update process. A rogue app or OS would be just as bad or much worse, potentially enabling any tracking, recording, or monitoring options. But just because Apple has this opportunity to do bad things, or to let others do bad things, doesn't make it something we fear every day. The ability is there, and the likes of China have the same leverage over it, where they could demand something and Apple would have to say no.
Post by Lstream on Aug 19, 2021 7:27:41 GMT -8
Here is Apple’s latest document, which attempts to address the issues being raised. I don’t have the technical ability to ascertain how all of Apple’s protections could be defeated. But at the heart of their approach is that it does not generically look at images and make decisions based on that. It runs an algorithm to see if the user’s phone contains images that are provided by at least two child safety organizations, without actually looking at those images. So the system “as-is” can’t do stuff like search for images that rogue governments disapprove of. At least that is my understanding. With that said, why are 90 organizations lining up against this? What have they seen that I (or apparently Apple) haven’t? This isn’t like other kinds of Apple attacks that have a profit or competitive motive behind them and are advanced by trashing Apple. Are they knee-jerk over-reacting, or do they have legitimate concerns?
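An aside for the technically curious: the blind matching described above, checking photos against a hash list without ever viewing the source images, can be sketched in a few lines. This toy uses a crude "average hash" purely as a stand-in for Apple's far more sophisticated NeuralHash; every image, hash value, and distance threshold below is invented for illustration and is not Apple's actual scheme.

```python
# Toy sketch of perceptual-hash matching: a device checks its photos
# against a database of hashes without ever seeing the source images.
# The average-hash below is a crude stand-in for NeuralHash; all of the
# "images" and thresholds here are made up for illustration.

def average_hash(pixels):
    """Hash an 8x8 grayscale image: one bit per pixel, set if the
    pixel is brighter than the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def matches(hash_a, hash_b, max_hamming=4):
    """Perceptual hashes tolerate small edits: compare by Hamming
    distance rather than exact equality."""
    return bin(hash_a ^ hash_b).count("1") <= max_hamming

# A known image, and a slightly brightened copy of it.
original = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
tweaked = [[min(255, p + 3) for p in row] for row in original]

database = {average_hash(original)}  # hashes only, no images stored
photo_hash = average_hash(tweaked)

flagged = any(matches(photo_hash, h) for h in database)
```

The point of the sketch: the database side never needs the photo itself, only its hash, yet a lightly edited copy of a known image still matches.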
chinacat
Moderator
AAPL Long since 2006
Posts: 4,426
Post by chinacat on Aug 19, 2021 7:27:46 GMT -8
PED has Apple’s NeuralHash blues: The Congressional angle. “In late 2019, after reports in The New York Times about the proliferation of child sexual abuse images online, members of Congress told Apple that it had better do more to help law enforcement officials or they would force the company to do so.” Caught between a rock (Congress) and a hard place (user privacy).
mark
fire starter
Posts: 1,552
Post by mark on Aug 19, 2021 8:49:03 GMT -8
Lstream said: “It runs an algorithm to see if the user’s phone contains images that are provided by at least two child safety organizations, without actually looking at those images. [...]”

Really? What if the North Korean Child Safety group and the China Safety for Children organization add a photo that they want tagged for disapproval? That's 2 countries and 2 organizations.
Post by Lstream on Aug 19, 2021 9:12:22 GMT -8
mark said: “Really? What if the North Korean Child Safety group and the China Safety for Children organization add a photo that they want tagged for disapproval? That's 2 countries and 2 organizations.”

So far this is for the US only, and Apple will need to approve the addition of other countries. Point taken, though. The same point would apply to Russia and its "independent" ally countries.
Post by mark on Aug 19, 2021 9:36:00 GMT -8
Lstream said: “So far this is for the US only, and Apple will need to approve the addition of other countries. [...]”

There is plenty of really nasty stuff that can happen within the USA. Remember, the photo itself can never be seen, just the hash of it (at least until an investigation happens after a photo is flagged). For example: there's a heated election coming up, and political party A is very much afraid of a particular potential candidate in political party B. So long before the election, and just before the primary in party B, groups that are friendly to party A submit a few photo hashes to the database of "forbidden" photos (they do this regularly as they uncover such photos). But in this case, the photo in question is one that is surely on the feared candidate's phone and cloud. Then someone leaks to the media that a "bad photo" was found in that candidate's photos, and neither Apple nor the candidate can deny it, because the system did indeed flag a photo on the candidate's device/cloud. The news cycle being what it is, the story blows up rapidly, the candidate loses the primary, and then a week later, when the photo is examined, it turns out to be innocuous ... and the organizations that submitted it claim "error".
Post by Lstream on Aug 19, 2021 10:07:18 GMT -8
mark said: “There is plenty of really nasty stuff that can happen within the USA. [...]”

Interesting discussion, but I am not seeing how this is a realistic threat, at least compared to what happens now. For this scenario to happen, I think the following steps need to occur:
1. The victim's phone or iCloud is hacked, and his photos are accessed.
2. Those photos need to be sent to the two child protection agencies. I think this needs to be at least 30 photos.
3. Don't those photos need to be offending content before they are added to the database and hashed? Isn't the scheme thwarted at this point if the content is innocuous? Per Apple's security doc, "Apple generates the on-device perceptual CSAM hash database through an intersection of hashes provided by at least two child safety organizations operating in separate sovereign jurisdictions". I am not seeing how non-offending photos pass through this gate, meaning no threshold is passed, meaning Apple has no basis to take any action.
4. So, on what basis can it be claimed that "forbidden" photos are on the victim's phone? Someone could claim that now if they felt like it. What is the difference?
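The two safeguards in point 3 can be sketched as follows. Hedging heavily: the organization names, hash strings, and the exact overlap below are invented for illustration; only the intersection idea and the roughly-30-match threshold come from Apple's public descriptions.

```python
# Sketch of the two gates discussed above: (1) only hashes vouched for
# by at least two independent organizations ship in the on-device
# database, and (2) an account is flagged only past a match threshold.
# All hash values and org contents here are illustrative.

MATCH_THRESHOLD = 30  # Apple has publicly cited roughly 30 matches

def build_database(org_a_hashes, org_b_hashes):
    """Only hashes submitted by BOTH organizations survive, so a single
    rogue organization cannot sneak an entry in on its own."""
    return set(org_a_hashes) & set(org_b_hashes)

def account_flagged(photo_hashes, database):
    """Flag only if the number of matching photos crosses the threshold."""
    hits = sum(1 for h in photo_hashes if h in database)
    return hits >= MATCH_THRESHOLD

org_a = {f"hash{i}" for i in range(100)}        # org A's submissions
org_b = {f"hash{i}" for i in range(50, 150)}    # org B overlaps on hash50..hash99
db = build_database(org_a, org_b)               # 50 vouched-for hashes survive

innocent = [f"other{i}" for i in range(200)]    # zero matches: never flagged
targeted = [f"hash{i}" for i in range(50, 90)]  # 40 matches: over threshold
```

A photo vouched for by only one organization never enters `db`, and an account with fewer than 30 matching photos is never flagged, which is the gate Lstream is pointing at.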
Post by mark on Aug 19, 2021 15:22:17 GMT -8
Lstream said: “Interesting discussion, but I am not seeing how this is a realistic threat, at least compared to what happens now. [...]”

I think you missed my point. The phone doesn't have to be hacked, and the photos are standard photos that would surely be on the target's phone. It's trivially easy to get a photo onto most people's phones ... the simplest way is to send them a friendly WhatsApp message with a flattering photo of them in it, or a meme that flatters them, etc. As soon as they open it, it gets saved to their phone, and 90+% of people have Photo Stream on iCloud, so it ends up on iCloud almost immediately. Because all this happens close to the primary election, it won't be resolved in time for the PR damage to be undone.
Post by Lstream on Aug 19, 2021 16:48:31 GMT -8
Ya, I am still missing your point and don't get it. I don't see how this ends up getting CSAM-identified photos onto the victim's phone: 30 child porn pics that also happen to intersect in 2 child protection databases. It seems like you are claiming that this scheme works no matter what kind of pics are sent to the victim's phone.
Post by hyci004 on Aug 19, 2021 19:29:33 GMT -8
mark said: “It's trivially easy to get a photo onto most people's phones ... the simplest way is to send them a friendly WhatsApp message with a flattering photo of them in it. [...] As soon as they open it, it gets saved to their phone.”

That's not true. Opening a photo in WhatsApp does not save that photo to your photo library.
Post by mark on Aug 20, 2021 5:36:40 GMT -8
hyci004 said: “That's not true. Opening a photo in WhatsApp does not save that photo to your photo library.”

It is true!!! There is a setting that can be turned off; it is in Settings->Chats->Save to Camera Roll.
Post by Lstream on Aug 20, 2021 6:21:45 GMT -8
True for that app, but not for WhatsApp or iMessage. I am a user of both. They DO NOT save to the camera roll. Kind of a side issue anyway to the main point of the discussion.
Post by mark on Aug 20, 2021 14:45:53 GMT -8
Lstream said: “True for that app, but not for WhatsApp or iMessage. I am a user of both. They DO NOT save to the camera roll.”

The photo I attached *IS* the settings from WhatsApp! Maybe you are using an older version of the app?