ah, not super intuitive. I see it now, thanks!
Fixed! Thank you 🙏 The Voyager app doesn’t give you comment previews so I didn’t catch it was broken.
I re-posted this somewhere else, but figured it makes sense to post here as it’s related to iMessage’s proclaimed “security” in comparison to Signal.
The text below is from a hackernews comment:
Gonna repeat myself since iMessage hasn’t improved one bit after four years. I also added some edits since attacks and Signal have improved.
iMessage has several problems:

First, iMessage encrypts to long-lived RSA keys rather than using an ephemeral key exchange, so there is no forward secrecy. That means an attacker who has

a) been collecting messages in transit from the backbone, or

b) in cases where clients talk to the server over a forward-secret connection, been collecting messages from the IM server,

only needs to compromise the private RSA key once to retroactively decrypt all messages encrypted with the corresponding public key. With iMessage the RSA key lasts practically forever, so one key can decrypt years’ worth of communication.
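To see why ephemeral key exchange fixes this, here is a toy Diffie-Hellman sketch over a deliberately tiny prime group (the parameters are made up for illustration and are NOT secure):

```python
import secrets

# Toy Diffie-Hellman parameters (made up, insecure; for illustration only).
p, g = 0xFFFFFFFB, 5

def session_key() -> int:
    a = secrets.randbelow(p - 2) + 2   # Alice's ephemeral secret
    b = secrets.randbelow(p - 2) + 2   # Bob's ephemeral secret
    A, B = pow(g, a, p), pow(g, b, p)  # public values, exchanged in the clear
    k_alice, k_bob = pow(B, a, p), pow(A, b, p)
    assert k_alice == k_bob            # both sides derive the same key
    return k_alice

# Each session gets an independent key. If the ephemeral secrets are
# deleted after use, nothing remains that can decrypt recorded traffic,
# unlike a long-lived RSA decryption key.
k1, k2 = session_key(), session_key()
```

With a long-lived RSA key the opposite holds: every recorded ciphertext stays decryptable for as long as that one key exists.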
I’ve often heard people say “you’re wrong, iMessage uses a unique per-message key and AES, which is unbreakable!” Both of these are true, but the unique AES key is delivered right next to the message, encrypted with the public RSA key. It’s like transporting a safe where the key to that safe sits in a glass box strapped to the side of the safe.
To compare these key sizes, we use https://www.keylength.com/en/2/
A 1280-bit RSA key (iMessage’s size) has about 79 bits of symmetric security. An 829-bit RSA key (the largest publicly factored, RSA-250 in 2020) has about 68 bits. So compared to what has publicly been broken, the iMessage RSA key is only 11 bits, i.e. 2048 times, stronger.
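The margin is just powers of two; a quick sketch using the symmetric-equivalent strength figures quoted above from keylength.com:

```python
# Symmetric-equivalent strength estimates, as cited from keylength.com.
imessage_rsa_strength = 79   # ~ a 1280-bit RSA key
broken_rsa_strength = 68     # ~ an 829-bit RSA key (publicly factored)

margin = imessage_rsa_strength - broken_rsa_strength
work_factor = 2 ** margin    # each extra bit doubles the attacker's work

print(margin, work_factor)   # 11 2048
```

Eleven bits sounds like a lot until you remember that dedicated adversaries improve by more than a constant factor over time.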
The same site estimates that in an optimistic scenario, intelligence agencies can only factor about 1507-bit RSA keys in 2024. The conservative (security-conscious) estimate assumes they can break 1708-bit RSA keys at the moment.
(Side note: even the optimistic scenario is very close to the 1536-bit DH keys the OTR plugin uses; you might want to switch to the OMEMO/Signal protocol ASAP.)
No recommendation on keylength.com suggests using anything less than 2048 bits for RSA or classical Diffie-Hellman. iMessage is badly, badly outdated in this respect.
A related design choice is message authentication with MACs. A MAC means that Alice, who talks to Bob, can be sure received messages came from Bob, because she knows she didn’t write them herself. But it also means she can’t show a message from Bob to a third party and prove Bob wrote it, because she holds the same symmetric key that, in addition to verifying the message, could have been used to authenticate it. So Bob can deny he wrote the message.
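The deniability property is easy to see in a toy sketch using Python’s standard-library HMAC (illustrative only; real protocols derive fresh per-message keys):

```python
import hashlib
import hmac
import secrets

# With a shared MAC key, *either* party can produce a valid tag, so a
# transcript proves nothing to a third party (deniability).
shared_key = secrets.token_bytes(32)   # known to both Alice and Bob

msg = b"meet at noon"
tag_from_bob = hmac.new(shared_key, msg, hashlib.sha256).digest()

# Alice verifies the tag came out right...
expected = hmac.new(shared_key, msg, hashlib.sha256).digest()
assert hmac.compare_digest(tag_from_bob, expected)

# ...but she could have forged the exact same tag herself:
forged_by_alice = hmac.new(shared_key, msg, hashlib.sha256).digest()
assert forged_by_alice == tag_from_bob
```

A digital signature, by contrast, can only be produced by the holder of the private key, so it is transferable proof of authorship.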
Now, this most likely does not mean anything in court, but that is no reason not to use best practices, always.
iMessage instead uses digital signatures, which do not give deniability. Its signature algorithm is ECDSA over the NIST P-256 curve, which according to https://safecurves.cr.yp.to/ is not cryptographically safe. Most notably, it is not fully rigid, but manipulable: “the coefficients of the curve have been generated by hashing the unexplained seed c49d3608 86e70493 6a6678e1 139d26b7 819f7e90”.
iMessage is proprietary: you can’t be sure it doesn’t contain a backdoor that allows retrieval of messages or private keys via some secret control packet from an Apple server.
iMessage allows an undetectable man-in-the-middle attack. Even if we assume there is no backdoor that allows private key or plaintext retrieval from the endpoint, it’s impossible to ensure the communication is secure. Yes, the private key never leaves the device, but if you encrypt the message with the wrong public key (which you by definition need to receive over the Internet), you might be encrypting messages to the wrong party.
You can NOT verify this by e.g. sitting on a park bench with your buddy and seeing that they receive the message seemingly immediately. It’s not as if the attack requires some NSA agent to hear eavesdropping phone 1 beep, read the message, and type it into eavesdropping phone 2, which then forwards it to the recipient. The attack can be trivially automated, and it is instantaneous.
So with iMessage the problem is, Apple chooses the public key for you. It sends it to your device and says: “Hey Alice, this is Bob’s public key. If you send a message encrypted with this public key, only Bob can read it. Pinky promise!”
Proper messaging applications use what are called public key fingerprints, which let you verify out-of-band that the messages your phone outputs are end-to-end encrypted with the correct public key, i.e. the one that matches the private key of your buddy’s device.
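The idea in a minimal sketch: a fingerprint is just a short digest of the public key bytes, compared over a separate channel (in person, over a phone call). The grouping format below is illustrative, not the actual format Signal or any other app uses, and the key bytes are stand-ins:

```python
import hashlib

def fingerprint(pubkey_bytes: bytes) -> str:
    """Short, human-comparable digest of a public key (toy format)."""
    digest = hashlib.sha256(pubkey_bytes).hexdigest()
    # Group into 4-char blocks so it can be read aloud.
    return " ".join(digest[i:i + 4] for i in range(0, 32, 4))

real_key = b"\x04" + b"\x11" * 64   # stand-in for Bob's real public key
mitm_key = b"\x04" + b"\x22" * 64   # stand-in for an attacker's substituted key

# If Apple's server handed you mitm_key, the fingerprints would not match
# when you compare them with Bob out-of-band:
print(fingerprint(real_key) == fingerprint(mitm_key))  # False
```

The security rests entirely on comparing the strings over a channel the attacker doesn’t control; comparing them through the same server being attacked proves nothing.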
EDIT: This has actually seen some improvements made a month ago! Please see the discussion in the replies.
When your buddy buys a new Apple device, such as a laptop, they can use iMessage on that device. You won’t get a notification about this. What happens in the background is that your buddy’s new device generates an RSA key pair and sends the public part to Apple’s key management server. Apple then forwards the public key to your device, and when you send a message to that buddy, your device first encrypts the message with the AES key, and then encrypts that AES key separately with the public RSA key of each of your buddy’s devices. The encrypted message and the encrypted AES keys are then passed to Apple’s message server, where they sit until your buddy fetches new messages on some device.
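The fanout structure can be sketched like this. The XOR/hash “wrap” is a toy stand-in for RSA-OAEP and AES, purely to show the shape of the protocol, and the device names are made up; never use anything like this for real encryption:

```python
import hashlib
import secrets

def toy_wrap(data: bytes, key: bytes) -> bytes:
    """Toy stand-in for key wrapping: XOR with a SHA-256-derived stream.
    Limited to 32-byte payloads; NOT real cryptography."""
    stream = hashlib.sha256(key).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

# Apple's key server holds one public key per device Bob owns.
bob_device_keys = {
    "iphone": secrets.token_bytes(32),
    "macbook": secrets.token_bytes(32),
}

message_key = secrets.token_bytes(32)                 # fresh per-message key
ciphertext = toy_wrap(b"hi Bob".ljust(32), message_key)

# The SAME message key is wrapped once per recipient device:
wrapped_keys = {dev: toy_wrap(message_key, pk)
                for dev, pk in bob_device_keys.items()}

# A silently added attacker "device" would simply receive one more
# wrapped copy of the key; nothing about the sender's view changes.
assert len(wrapped_keys) == len(bob_device_keys)
```

This is exactly why the key server is the trust bottleneck: whoever controls the device list controls who can unwrap the message key.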
Like I said, you will never get a notification like “Hey Alice, looks like Bob has a brand new cool laptop, I’m adding the iMessage public keys for it so they can read iMessages you send them from that device too”.
This means that a government issuing a FISA court national security request (a stronger form of NSL), or any attacker who hacks the iMessage key management server, or any attacker who breaks the TLS connection between you and the key management server, can send your device a packet containing the attacker’s RSA public key and claim that it belongs to some Apple device Bob owns.
You could possibly detect this by asking Bob how many Apple devices they have, stripping TLS from iMessage, and seeing how many encrypted AES keys are being output. But it’s also possible Apple can remove keys from your device to keep iMessage snappy, and they can very possibly replace keys on your device. Even if they can’t do that, they can wait until your buddy buys a new Apple device, and only then perform the man-in-the-middle attack against that key.
To sum it up, as Matthew Green said[1]: “Fundamentally the mantra of iMessage is ‘keep it simple, stupid’. It’s not really designed to be an encryption system as much as it is a text message system that happens to include encryption.”
Apple has great security design in many parts of its ecosystem. However, iMessage is EXTREMELY bad design, and should not be used under any circumstances that require verifiable privacy.
In comparison, Signal:

- Uses Diffie-Hellman + Kyber for key agreement, not RSA.
- Uses Curve25519, a safe curve with 128 bits of symmetric security, not 79 bits like iMessage.
- Uses the Kyber key exchange for post-quantum security.
- Uses MACs instead of digital signatures.
- Is not just free and open source software, but has reproducible builds, so you can be sure your binary matches the source code.
- Features public key fingerprints (called safety numbers) that allow you to verify that no MITM attack is taking place.
- Does not allow key insertion attacks under any circumstances: you always get a notification that the encryption key changed. If you’ve verified the safety numbers and marked them “verified”, you won’t even be able to accidentally use an inserted key without manually approving the new keys.
So do yourself a favor and switch to Signal ASAP.
[1] https://blog.cryptographyengineering.com/2015/09/09/lets-tal…
The core functions required to send/receive iMessage messages are based on the open source project pypush
- not necessarily even forked from the same code. The app is as closed-source as it gets, in that we’re given a black box and asked to trust it. I’m not telling people they should or shouldn’t trust them; it’s just something to consider.
CLIs are likely not specifically the target. I suspect the CLI is just the low-hanging fruit: the core set of software that needs to be supported before you can build up to fully functional GUI apps.
I’ve been hoping for the last few years that this project would make significant progress toward running GUI apps. Unfortunately it’s been slow, as there isn’t as much interest in getting Mac apps to run on Linux as there is for Windows apps with WINE. That said, I don’t fault them; it’s a daunting task, and WINE has the benefit of three decades of progress under its belt.
For those not familiar: this basically lets you run command-line tools. Anything with a GUI will not work.
A non-issue, really. Governments should be using their own apps and infrastructure for private communication, if only to ensure they’re in full control not just of their messaging but of the infra itself.
I have 3 Google Home products (varying sizes) that sync music across the kitchen/living room, bathroom, and bedroom. It makes for a great listening experience.
have been able to do so with much smaller funding
It’s easy to “stand on the shoulders of giants” and claim some software is better when you’re adding 1-5% of additional work on top of a fully developed service/app/infrastructure. It’s why forks of software generally tend to have more features than the original source: people polish something and release it as their own improved creation.
Now, I’m not saying people should stop forking software. I’m all for it, as it breeds competition and innovation. But complaining that a software project is not meeting your specific demands while its forks do so much more means you’re not understanding that the forks would probably die without all the hard work that goes into the core product.
whereas even such basic shit implemented solely in Molly, such as app passwords that actually encrypt its database, is pretty useful.
You say this, but do you have any evidence to back up the claim that it’s useful, and to whom? Who’s asking for it? What percentage of Signal users would enable the feature? Is it 1%? Is that worth it? There’s barely any demand for privacy from the general populace; otherwise Signal would be a hit and everyone would leave WhatsApp immediately, but they haven’t.
if you use most tiling compositors
You’re the 1% of the 1% when it comes to desktop configurations if you’re using a tiling window manager. I used one about 10 years ago and have yet to find one other person in the real world who has ever used one, and I work in IT. Whether you like it or not, Signal developers are not going to spend any effort on making your very niche use case better. I’m not saying that to be rude, but you have to be realistic: your expectations are high for a free service that generally works for 99% of the population.
awesome! I obviously haven’t been keeping up. thanks!
Likely because, while SimpleX looks great and is very promising, it doesn’t add much to the conversation here. Signal is primarily a replacement for SMS/MMS, which means people generally want their contacts readily available and discoverable to minimize the friction of securely messaging friends and family. Additionally, it’s dangerous to recommend a service that hasn’t been audited or proven itself secure over time.
link to report so we can track? thanks!
Not all, but some will, and that’s good enough. Security and privacy are all about layers, not guaranteed solutions.
That said, if you have “business” with a company, they are probably using your registered home address to work out which local laws/regulations apply to you. E.g. if you’re using a registered Google account and don’t have an address in a state that offers protection, it’s very unlikely they’ll extend any privacy policies to you just because your IP says you’re in California, for example.
OTOH, if you don’t have a registered address/account/profile and your IP is coming out of California, it’s possible some companies will apply the stricter policies to you.
To your original point though, yes, shady companies will continue to behave in unethical ways.
I’m not saying it can’t be private, but defaults matter, and by default every message sent on Telegram (unless you opt into a “Secret Chat”) is viewable by anyone with access to Telegram’s infrastructure, and you have no way to know whether your message history has been compromised.
In contrast, everything within Signal is completely private and end-to-end encrypted with no compromises. Your groups, group names, profile pictures, stickers, reactions, voice/video messages, etc. are all private without anyone having to do anything. Privacy is enforced, not an option.
Telegram does have Secret Chats, but, intentionally or not, they have made them incredibly inconvenient to use: they are not enabled by default, don’t work in group chats, and don’t sync across your own devices.
So yes, Telegram is private, in about the same way a PGP-encrypted email is private.
Can you elaborate? It’s my understanding that push notifications are only used to trigger Signal to check whether there are messages; the message data and who/what triggered it are not sent to Google/Apple. If you don’t trust push notifications, you can always use a de-Googled phone and the Signal APK, which will fall back to polling the server; this will obviously impact battery life, as the app needs to constantly check for new messages.