Known limitations
Unicode passwords
Kryptor uses UTF-8 to convert password strings/characters to bytes. Unfortunately, Unicode normalization is not applied, meaning the same visible character can be represented by different code point sequences, and therefore different bytes, depending on the operating system, keyboard, etc. This can cause issues decrypting files when non-ASCII characters are used in a password.
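For example, "é" can be stored either as the single code point U+00E9 or as "e" followed by the combining acute accent U+0301. Both look identical, but they encode to different UTF-8 bytes, so the derived key (and therefore decryption) differs. Here's a minimal Python illustration of the mismatch (not Kryptor's code):

```python
import unicodedata

# The same visible password produced in two different ways: one system gives
# the precomposed form (NFC), another the decomposed form (NFD).
password_nfc = unicodedata.normalize("NFC", "caf\u00e9")
password_nfd = unicodedata.normalize("NFD", "caf\u00e9")

print(password_nfc == password_nfd)        # False
print(password_nfc.encode("utf-8").hex())  # ends in c3a9
print(password_nfd.encode("utf-8").hex())  # ends in 65cc81
# Different bytes in -> different key out -> decryption fails.
```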
Sadly, there's no pretty solution to this problem. Even if you use Unicode normalization, you can still encounter this issue, so it's only a partial fix that annoyingly duplicates the password in memory. Confusingly, there are also multiple forms of normalization, and different organisations don't agree on which one developers should use.
To eliminate the problem, you have to enforce ASCII characters, which some people are against since it's restrictive and reduces password entropy for the same number of characters. Furthermore, organisations like NIST, the IETF, and OWASP recommend supporting Unicode characters. However, people should be randomly generating passwords/passphrases using a password manager or Diceware (even if they're going to be memorised), which results in ASCII passwords unless you go out of your way to use Unicode characters (e.g. a non-English wordlist). Additionally, some websites don't support Unicode passwords, and those that do have password reset functionality as a fallback, which isn't possible with an offline application.
Multi-recipient sender authentication
Kryptor currently only has sender authentication with a single recipient or if all recipients are honest. When sending a file to multiple recipients, if any are malicious or have had their private key compromised, they can create a new file that looks like it came from the original sender.
This is possible because all recipients have access to the same file key (to avoid encrypting the file repeatedly, which is slow and would require sending multiple files) and the wrapped keys aren't tied to that specific encrypted file payload (they can be decrypted independently of the payload).
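The following Python sketch models the problem under simplified assumptions (random keys, a toy header, Python's cryptography package): it is not Kryptor's real file format, but it shows why any recipient who can unwrap the file key can also produce a convincing forgery.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

file_key = os.urandom(32)
recipient_kek = os.urandom(32)  # stands in for a key derived from the recipient's key pair

# Sender: wrap the file key for the recipient, then encrypt the payload with it.
wrap_nonce, payload_nonce = os.urandom(12), os.urandom(12)
wrapped_key = ChaCha20Poly1305(recipient_kek).encrypt(wrap_nonce, file_key, None)
payload = ChaCha20Poly1305(file_key).encrypt(payload_nonce, b"original file contents", None)

# Malicious recipient: unwrap the shared file key...
stolen_key = ChaCha20Poly1305(recipient_kek).decrypt(wrap_nonce, wrapped_key, None)

# ...and attach a brand-new payload. The wrapped keys don't commit to the
# payload, so every other recipient decrypts this successfully and has no way
# to tell it didn't come from the original sender.
forgery = ChaCha20Poly1305(stolen_key).encrypt(os.urandom(12), b"forged file contents", None)
```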
Protecting against this generally requires processing the entire file in one go rather than in chunks, either using a signature scheme (outsider non-repudiation) or a MAC (insider non-repudiation). This is something that was actively avoided due to the benefits of chunking. However, another approach would be to use a session-supporting AEAD, which allows intermediate tags. The final tag can then be used to authenticate the entire sequence of chunks in one pass. This is possible with the duplex construction and deck functions, but as far as I know, intermediate tags have seen little adoption in practice.
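As a rough illustration of the intermediate-tag idea using a conventional AEAD (this is not what Kryptor currently does), each chunk's associated data can include the previous chunk's tag, its position, and a last-chunk flag, so the final tag transitively authenticates the entire sequence:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

def encrypt_chunks(key: bytes, chunks: list[bytes]) -> list[bytes]:
    aead = ChaCha20Poly1305(key)
    previous_tag = b""
    ciphertexts = []
    for counter, chunk in enumerate(chunks):
        last = counter == len(chunks) - 1
        nonce = counter.to_bytes(11, "little") + (b"\x01" if last else b"\x00")
        # Binding the previous tag, the chunk index, and the last-chunk flag
        # into the associated data means a valid final tag implies every
        # earlier chunk is present, untampered, and in order.
        associated_data = previous_tag + counter.to_bytes(8, "little") + bytes([last])
        ciphertext = aead.encrypt(nonce, chunk, associated_data)
        previous_tag = ciphertext[-16:]  # Poly1305 tag appended by the library
        ciphertexts.append(ciphertext)
    return ciphertexts
```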
20 recipients
An indistinguishable file format means more complicated parsing, and you don't want to leak the number of recipients. As such, I decided to use a fixed header format like before. This is simple but less efficient, adds constant storage overhead regardless of how many recipients you have, and limits the number of recipients to 20. However, this was the original limit with age and should rarely be a problem.
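Here's a sketch of what such a fixed header could look like (hypothetical slot size and layout, not Kryptor's actual format): every file reserves exactly 20 wrapped-key slots, and unused slots are filled with random bytes so neither the header size nor its contents reveal the recipient count.

```python
import os

MAX_RECIPIENTS = 20
SLOT_SIZE = 48  # hypothetical size of one wrapped file key

def build_header(wrapped_keys: list[bytes]) -> bytes:
    if len(wrapped_keys) > MAX_RECIPIENTS:
        raise ValueError("too many recipients")
    if any(len(key) != SLOT_SIZE for key in wrapped_keys):
        raise ValueError("wrapped keys must fill a slot exactly")
    slots = list(wrapped_keys)
    while len(slots) < MAX_RECIPIENTS:
        slots.append(os.urandom(SLOT_SIZE))  # filler indistinguishable from a real slot
    return b"".join(slots)  # always MAX_RECIPIENTS * SLOT_SIZE bytes

# Decryption has to trial-decrypt each slot with the recipient's key, since
# there's no marker saying which slot (if any) belongs to them.
```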
File metadata
Whilst encrypted files are intended to be indistinguishable from random data, if an attacker knows that the same file has been encrypted many times, they may be able to determine the unpadded file length.
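To see why repetition matters, assume (purely for illustration; the real padding scheme may differ) that each encryption appends a random amount of padding. One ciphertext only gives an upper bound on the file length, but the minimum over many ciphertexts of the same file converges on the unpadded length:

```python
import secrets

FILE_LENGTH = 10_000  # unpadded length the attacker wants to learn
MAX_PADDING = 1_024   # hypothetical maximum random padding per encryption

def observed_length() -> int:
    return FILE_LENGTH + secrets.randbelow(MAX_PADDING + 1)

samples = [observed_length() for _ in range(1_000)]
print(min(samples))  # almost certainly FILE_LENGTH, or very close to it
```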
The headers are also fixed in size, so there's a range of small file sizes that Kryptor doesn't produce. Thus, files of the minimum length could be seen as an indicator that Kryptor was used. However, dummy random files could be stored to address this type of problem.
Finally, the timestamps on encrypted files are currently untouched. This may change in the future, perhaps only when file name encryption is specified, but proper timestomping is more complicated than simply changing the standard timestamps.
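For context, changing the standard timestamps is just a couple of system calls, as in this hypothetical Python sketch (again, not something Kryptor does today); the harder part is everything it doesn't cover:

```python
import os

def clear_standard_timestamps(path: str) -> None:
    # Reset access and modification times to the Unix epoch, an obviously
    # artificial value. Creation time and filesystem-level records (e.g. the
    # NTFS $FILE_NAME timestamps) are not touched by this and need
    # platform-specific handling.
    os.utime(path, (0, 0))
```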
Post-quantum security
The asymmetric algorithms in Kryptor aren't post-quantum secure. However, a pre-shared key can be specified when encrypting with your private key to add post-quantum security.
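Conceptually, the pre-shared key gets mixed into the key derivation alongside the asymmetric shared secret, so an attacker who breaks X25519 with a quantum computer still learns nothing without the pre-shared key. The sketch below shows that idea with an illustrative KDF; the inputs and hash are not Kryptor's exact construction:

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

sender_private = X25519PrivateKey.generate()
recipient_private = X25519PrivateKey.generate()
pre_shared_key = b"\x42" * 32  # exchanged out of band and kept secret

# Both sides can compute the same X25519 shared secret...
shared_secret = sender_private.exchange(recipient_private.public_key())

# ...but the file key also depends on the pre-shared key, so recovering the
# shared secret alone (e.g. with a quantum computer) isn't enough.
file_key = hashlib.blake2b(shared_secret + pre_shared_key, digest_size=32).digest()
```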
Why a pre-shared key instead of post-quantum algorithms?
Some post-quantum schemes have already been broken, and the security of others is still uncertain, so it would be unwise to switch completely. By contrast, pre-shared keys are secure if stored, sent, and erased properly.
Few protocols use post-quantum secure asymmetric algorithms, and those that do are mostly experimental. It won't become the norm for a long time.
Few cryptographic libraries support such algorithms, so you'd need extra dependencies and probably a custom .NET wrapper around a C library.
A hybrid solution would significantly complicate key pair generation and public key sharing. In contrast, pre-shared keys add minimal complexity; the main issue is sending them securely if encrypting a file to someone.
The NIST post-quantum standardisation effort isn't over yet. More research is required for proper adoption.
Hardware support
ChaCha20-Poly1305 is not as fast as algorithms like AEGIS and Rocca-S, which benefit from AES hardware support. However, it's still fast, doesn't require hardware support, is widely used, and the cryptography is unlikely to be a performance bottleneck compared to disk IO. With that said, ChaCha20-Poly1305 will likely be replaced in the future.
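If you want to check the bottleneck claim on your own machine, a rough throughput measurement is enough to compare against your disk's sequential speed (illustrative Python, not part of Kryptor):

```python
import os
import time
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

aead = ChaCha20Poly1305(os.urandom(32))
data = os.urandom(64 * 1024 * 1024)  # 64 MiB of random data

start = time.perf_counter()
aead.encrypt(os.urandom(12), data, None)
elapsed = time.perf_counter() - start
print(f"{len(data) / elapsed / 1e6:.0f} MB/s")
```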
Compromised machine
If an attacker has physical or remote access to your machine, they could retrieve sensitive data (e.g. encryption keys) whilst Kryptor is running. This is quite literally impossible to prevent.
However, Kryptor does attempt to zero out sensitive data in memory as soon as possible. With pinning, this should be guaranteed, but sometimes pinning can't be used. For example, non-interactive string inputs (e.g. pre-shared keys passed on the command line) can't be erased from memory and will unfortunately get leaked into the process table/shell history.
Ed25519 for digital signatures can also be susceptible to fault attacks when an attacker has physical or remote access to the machine. As this is generally only a concern for embedded devices and most mitigations are slow and ineffective, this type of attack is typically not protected against.