refactor: user storage small encryption changes. #24623
Conversation
CLA Signature Action: All authors have signed the CLA. You may need to manually re-run the blocking PR check if it doesn't pass in a few minutes.
```ts
#SCRYPT_N: number = 2 ** 17; // CPU/memory cost parameter (must be a power of 2, > 1)
#SCRYPT_r: number = 8; // Block size parameter
#PBKDF2_ITERATIONS: number = 900_000;
#SCRYPT_p: number = 1; // Parallelization parameter
```
Using the OWASP-recommended configuration.
This is flexible and can easily be raised post-launch if the "work" lever needs to be turned up.
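For reference, the parameters above can be exercised with Node's built-in `crypto.scryptSync`. The PR itself uses `@noble/hashes`, so this is an illustrative sketch of the configuration, not the actual implementation:

```typescript
import { randomBytes, scryptSync } from 'crypto';

// OWASP-style scrypt parameters, mirroring the constants in the diff.
const SCRYPT_N = 2 ** 17; // CPU/memory cost parameter (power of 2, > 1)
const SCRYPT_r = 8; // block size parameter
const SCRYPT_p = 1; // parallelization parameter
const SCRYPT_SALT_SIZE = 16; // bytes

function deriveKey(
  password: string,
  salt: Buffer = randomBytes(SCRYPT_SALT_SIZE),
): { salt: Buffer; key: Buffer } {
  const key = scryptSync(password, salt, 32, {
    N: SCRYPT_N,
    r: SCRYPT_r,
    p: SCRYPT_p,
    // scrypt needs roughly 128 * N * r bytes (~128 MiB here), which is
    // above Node's 32 MiB default maxmem, so raise the limit explicitly.
    maxmem: 256 * 1024 * 1024,
  });
  return { salt, key };
}
```

Raising `N` later only changes the constant; old payloads still decrypt as long as each payload carries the salt (and parameters) it was encrypted with.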
Builds ready [460def7]
Page Load Metrics (448 ± 374 ms)
Bundle size diffs [🚨 Warning! Bundle size has increased!]
Codecov Report
Attention: Patch coverage is

Additional details and impacted files

```
@@            Coverage Diff             @@
##           develop   #24623     +/-   ##
===========================================
- Coverage    67.42%   67.39%   -0.03%
===========================================
  Files         1289     1292       +3
  Lines        50241    50291      +50
  Branches     13014    13023       +9
===========================================
+ Hits         33871    33889      +18
- Misses       16370    16402      +32
```

☔ View full report in Codecov by Sentry.
Super good job man!
```ts
#PBKDF2_SALT_SIZE: number = 16; // 16 bytes
#SCRYPT_SALT_SIZE: number = 16; // 16 bytes

#SCRYPT_N: number = 2 ** 17; // CPU/memory cost parameter (must be a power of 2, > 1)
```
This is the recommended value from OWASP. Another recommendation I saw elsewhere was 2^22, but I'm afraid that would be too big of a memory impact.
(should) Please do test the memory impact of the current value (and maybe 2^22 too).
2^17 was documented to be 128MB of memory pressure, but with a JS implementation it might well be 4x that if it's not implemented on raw byte arrays all the way.
Anyway, we want to know what the memory spike of this is.
I recommend the "Allocation instrumentation on timeline" option in Chromium devtools -> Memory as the first thing to look at.
Some ad-hoc JS performance testing:

```
2**17 = ~1.5s
2**18 = ~1.8s
2**19 = ~2.4s
2**20 = ~4.4s
2**21 = ~7.5s
2**22 = ~13.8s
```
I'll also test memory performance, but >2s is a little bad for us (even if it is a one-time cost).
If we want to support larger N values, we'll need to do a bigger refactor and keep a persisted cached key and create this key in the background.
To keep unblocked, is it possible to keep OWASP configuration and up this in a future PR?
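The rough timings above can be reproduced with a micro-benchmark along these lines. This sketch uses Node's built-in scrypt rather than `@noble/hashes`, so absolute numbers will differ from the figures quoted (native code is typically faster than a pure-JS implementation):

```typescript
import { scryptSync } from 'crypto';

// Time a single key derivation for a given cost exponent logN.
function timeScrypt(logN: number): number {
  const start = performance.now();
  scryptSync('password', Buffer.alloc(16), 32, {
    N: 2 ** logN,
    r: 8,
    p: 1,
    // scrypt uses ~128 * N * r bytes of working memory; allow headroom.
    maxmem: 1024 * 1024 * 1024,
  });
  return performance.now() - start;
}

for (const logN of [17, 18]) {
  console.log(`2**${logN}: ~${Math.round(timeScrypt(logN))}ms`);
}
```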
Memory timelines show the allocations, but they don't reach the max memory described in the @noble readme.
```ts
    key: cachedEntry[1],
  };
};

const setCachedKey = (salt: Uint8Array, key: Uint8Array): void => {
```
In case anyone else is wondering how more than one key would end up here - note that the memory cache can be lost.
If you encrypt something, send it, restart the browser, encrypt something again and then fetch the previously encrypted value - you have 2 keys.
Could there be more? Yes. The mobile app and portfolio could do the same on various features' user data, so the maximum number of active keys is 1 + the number of features that can be stored.
As a result, if you keep your app alive with this implementation for a long period of time while other instances keep resetting, it could accumulate a long list of keys. Restarting fixes it though.
(could) To avoid this rare memory-leak scenario we might consider adding an if to setCachedKey to check if inMemCachedKDF.size is above a reasonable maximum (20? 100?) and clear the cache if so, to prevent it growing indefinitely.
The likelihood of that ever happening feels low though.
Yep very low.
Good idea; I've set a limit of 10 cached keys. We can change this in the future if we ever need more.
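A minimal sketch of what that cap might look like. Names such as `inMemCachedKDF` and `MAX_CACHED_KEYS` follow the discussion above; the real implementation may differ:

```typescript
const MAX_CACHED_KEYS = 10; // limit agreed above; adjustable later

// In-memory KDF cache, keyed by base64-encoded salt.
const inMemCachedKDF = new Map<string, Uint8Array>();

const setCachedKey = (salt: Uint8Array, key: Uint8Array): void => {
  // Re-deriving a key is a one-time ~1s cost, so a coarse "clear
  // everything" eviction is acceptable to stop unbounded growth.
  if (inMemCachedKDF.size >= MAX_CACHED_KEYS) {
    inMemCachedKDF.clear();
  }
  inMemCachedKDF.set(Buffer.from(salt).toString('base64'), key);
};
```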
part of requested changes
```ts
  o: EncryptedPayload['o'],
  salt?: Uint8Array,
) {
  const cachedKey = salt ? getCachedKeyBySalt(salt) : getAnyCachedKey();
```
@Prithpal-Sooriya, why is getAnyCachedKey() needed?
It's needed because encryption only needs one key, whichever it is; in normal circumstances there's only one - the most recently created. Older payloads, or ones coming from other instances, are encrypted with locally derived keys using a different salt, so those keys need to be derived here as well for decryption to succeed. If the same key/salt is later used to decrypt something else from the other instance, it's also cached. For encrypting, it doesn't really matter which of the keys is used, so the first one we can get from the cache is fine.
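In other words, a hypothetical sketch of the two lookups (the function names come from the diff; the internals shown here are assumptions):

```typescript
// In-memory cache of derived keys, keyed by base64-encoded salt.
const inMemCachedKDF = new Map<string, Uint8Array>();

const toB64 = (salt: Uint8Array): string => Buffer.from(salt).toString('base64');

// Decryption must use the key derived from the payload's own salt.
const getCachedKeyBySalt = (
  salt: Uint8Array,
): { salt: Uint8Array; key: Uint8Array } | undefined => {
  const key = inMemCachedKDF.get(toB64(salt));
  return key ? { salt, key } : undefined;
};

// Encryption can use any cached key, so return the first entry found.
const getAnyCachedKey = ():
  | { salt: Uint8Array; key: Uint8Array }
  | undefined => {
  const entry = inMemCachedKDF.entries().next().value as
    | [string, Uint8Array]
    | undefined;
  return entry
    ? { salt: new Uint8Array(Buffer.from(entry[0], 'base64')), key: entry[1] }
    : undefined;
};
```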
Thanks for the clarification. My concern is about correctness: the function disregards the provided password whenever a key is already in the cache, so a valid cached key could be used even when an invalid password is supplied.
This is a good callout.
Currently in all cases we are using the same password.
I think we can change this to be a map of password + salt to the cached key :)
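One way that change could look. This is a hedged sketch, not the PR's code: the password is hashed so it isn't held in plain text as a map key, and the helper names are assumptions:

```typescript
import { createHash } from 'crypto';

const cachedKeys = new Map<string, Uint8Array>();

// Derive a cache key from both the password and the salt, so a cached
// key can never be returned for the wrong password.
const cacheKeyFor = (password: string, salt: Uint8Array): string =>
  createHash('sha256').update(password).update(salt).digest('base64');

const setCachedKey = (
  password: string,
  salt: Uint8Array,
  key: Uint8Array,
): void => {
  cachedKeys.set(cacheKeyFor(password, salt), key);
};

const getCachedKey = (
  password: string,
  salt: Uint8Array,
): Uint8Array | undefined => cachedKeys.get(cacheKeyFor(password, salt));
```

With this shape, an invalid password simply misses the cache and falls through to a fresh (failing) derivation, addressing the correctness concern above.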
also fix linting issues
… NOTIFY-616-update-user-storage-encryption
Builds ready [7f6f5c7]
Page Load Metrics (707 ± 539 ms)
Bundle size diffs [🚨 Warning! Bundle size has increased!]
GO GO GO
Description
NOTE this service is not live (yet), so we have time to refactor.
We found pbkdf2 to be pretty slow, and we wanted to swap it for something else. We wanted to use Argon2, however @noble has not audited it yet. scrypt, using the OWASP configuration, seems to be faster than pbkdf2.
We've also added some in-memory caching of the key by salt. We are fine if this cache gets cleared; it only incurs the initial key-generation cost (around ~1 second) when encrypting or decrypting.
Related issues
Fixes:
Manual testing steps
Screenshots/Recordings
Before
After
Pre-merge author checklist
Pre-merge reviewer checklist