- Do we need a chunked transfer API to support TPM CRB chunks?
- Do we need some sort of JSON object advertising the needed buffer size for the current profile (dependent on ML-KEM/ML-DSA usage)?
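If such an advertisement were added, it might look like the following sketch; every field name and value here is hypothetical, not an existing libtpms interface:

```json
{
  "BufferSizeRequirements": {
    "profile": "custom",
    "maxCommandBufferSize": 9600,
    "maxResponseBufferSize": 9600,
    "reason": "ML-DSA-87 signatures and ML-KEM-1024 keys enabled"
  }
}
```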
- Enable ML-DSA:
  - Enable new commands:
    - SignSequenceStart
    - SignSequenceComplete
    - VerifySequenceStart
    - VerifySequenceComplete
    - SignDigest
    - VerifyDigestSignature
  - Support marshalling of ML-DSA keys as part of OBJECT (?) needed for state and context
  - Write the state maintained during a sequence
- Enable ML-KEM:
  - Enable new commands:
    - Encapsulate
    - Decapsulate
  - Support marshalling of ML-KEM keys as part of OBJECT (?) needed for state and context
- Extend MAX_NV_INDEX_SIZE so that ML-DSA-signed EK and platform certificates can be written to one index (ML-DSA-44: ~2757 bytes; ML-DSA-87: ~4964 bytes), OR wait for Falcon PQC with smaller signatures (?): tpm2: Increase MAX_NV_INDEX_SIZE (StateFormatLevel 9) #535
- Re-add Camellia-192 test cases
- Re-add SM4 support
- All files must have an SPDX-License-Identifier
- Enable other commands:
  - NV_DefineSpace2
  - NV_ReadPublic2
- Get rid of sizeof()-related checks in tests/nvram_offsets.c that are no longer necessary now that OBJECTs are marshalled/unmarshalled instead of just copied into NVRAM
For v0.11:

```c
#define MAX_CONTEXT_SIZE 2680 /* libtpms: changed for RSA-3072 */
```