* AKPublic.Verify: Return an error if a provided PCR of the correct
digest was not included in the quote.
* AKPublic.VerifyAll: Implement VerifyAll method, which can cross-check
that provided PCRs were covered by quotes across PCR banks.
* PCR.QuoteVerified(): Introduce getter method to expose whether a
PCR value was covered during quote verification.
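As a rough sketch of how these fit together (assuming go-attestation-style types and signatures for `AKPublic.VerifyAll`, `PCR.QuoteVerified`, and the `PCR.Index` field; the exact API may differ):

```go
package attestexample

import (
	"fmt"

	"github.com/google/go-attestation/attest"
)

// checkQuotes cross-checks the provided PCRs against quotes from all PCR
// banks, then confirms every PCR value was actually covered by a quote.
func checkQuotes(ak *attest.AKPublic, quotes []attest.Quote, pcrs []attest.PCR, nonce []byte) error {
	if err := ak.VerifyAll(quotes, pcrs, nonce); err != nil {
		return fmt.Errorf("quote verification failed: %v", err)
	}
	for _, pcr := range pcrs {
		if !pcr.QuoteVerified() {
			return fmt.Errorf("PCR %d was not covered by any quote", pcr.Index)
		}
	}
	return nil
}
```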
Event log verification is terrible and easy to mess up. Even if you
replay against the PCRs, there are still values that can be tampered with
or reordered. PCRs also shouldn't be trusted unless they're attested to
have come from the correct TPM.
Given this, it seems advantageous to add some ability to consume raw
event logs, even if it's just for debugging.
* Export InvalidPCRs field in ReplayError
To retrieve the PCRs that could not be replayed against the event log, this field needs to be exported, since it carries the exact failure information. The ReplayError's Events field lists all of the events, but does not identify which PCR indexes failed to replay (see the sketch after the output below).
The following comes from a test that extends PCR 7 and then verifies PCRs 7, 8 and 9 against the event log. Output:
```
event log failed to verify: the following registers failed to replay: [7]
ReplayError Events:=[107]
Replay Error Events PCR indexes=[0 7 2 3 6 9 8 1 4 5]
```
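With InvalidPCRs exported, a caller can report exactly which registers failed rather than dumping every event. A minimal sketch, assuming the returned error can be unwrapped into an attest.ReplayError with exported Events and InvalidPCRs fields:

```go
package attestexample

import (
	"errors"
	"log"

	"github.com/google/go-attestation/attest"
)

// reportReplayFailure logs which PCR indexes could not be replayed against
// the event log, using the exported InvalidPCRs field.
func reportReplayFailure(err error) {
	var rerr attest.ReplayError
	if errors.As(err, &rerr) {
		log.Printf("events in log: %d", len(rerr.Events))
		log.Printf("PCR indexes that failed to replay: %v", rerr.InvalidPCRs)
	}
}
```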
* Add a comment to the exported field
Systems with TXT enabled may issue the TPM2_Startup() command from a
locality other than 0. In this case, the initial value of PCR0 will
represent the locality that the call was made from. This is exposed to
higher layers by an EV_NO_ACTION event that has data containing the
NULL-terminated string "StartupLocality" followed by a single byte
representing the startup locality. As this event is EV_NO_ACTION,
it does not represent an extension in itself.
So:
1) Ignore events that are EV_NO_ACTION when replaying the log, except:
2) For PCR0, if an event is EV_NO_ACTION and contains the string
"StartupLocality", use the final byte of the event data as the initial
value of PCR0 for the replay.
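A sketch of that replay rule (the rawEvent type and field names here are hypothetical stand-ins, not the package's actual internals):

```go
package attestexample

import (
	"bytes"
	"crypto/sha256"
)

const evNoAction = 0x00000003 // EV_NO_ACTION

// rawEvent is a hypothetical stand-in for one parsed log entry.
type rawEvent struct {
	index int
	typ   uint32
	data  []byte
}

// initialPCR0 returns the value PCR0 should start from during replay:
// all zeroes, unless an EV_NO_ACTION "StartupLocality" event is present,
// in which case the final byte of the event data seeds the final byte of
// the initial value.
func initialPCR0(events []rawEvent) []byte {
	init := make([]byte, sha256.Size)
	marker := append([]byte("StartupLocality"), 0) // NULL-terminated string
	for _, e := range events {
		if e.index != 0 || e.typ != evNoAction {
			continue // EV_NO_ACTION events are otherwise ignored in replay
		}
		if len(e.data) == len(marker)+1 && bytes.HasPrefix(e.data, marker) {
			init[len(init)-1] = e.data[len(e.data)-1]
		}
	}
	return init
}
```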
Using the length of a digest to infer the hash algorithm is somewhat
fragile: if we end up with multiple hash algorithms that share the same
digest length, things will break. Instead, pass more complete digest
information through to the relevant functions and figure things out by
mapping the TPM hash algorithm to the appropriate Go type.
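For instance, a lookup keyed on the TPM 2.0 algorithm ID (values from the TCG Algorithm Registry) is more robust than switching on digest length; roughly:

```go
package attestexample

import (
	"crypto"
	"fmt"
)

// hashFromTPMAlg maps a TPM 2.0 algorithm ID to the corresponding Go
// crypto.Hash, instead of inferring the algorithm from a digest's length.
func hashFromTPMAlg(alg uint16) (crypto.Hash, error) {
	switch alg {
	case 0x0004: // TPM_ALG_SHA1
		return crypto.SHA1, nil
	case 0x000B: // TPM_ALG_SHA256
		return crypto.SHA256, nil
	case 0x000C: // TPM_ALG_SHA384
		return crypto.SHA384, nil
	case 0x000D: // TPM_ALG_SHA512
		return crypto.SHA512, nil
	default:
		return 0, fmt.Errorf("unsupported TPM algorithm ID 0x%04x", alg)
	}
}
```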
Crypto agile logs may contain digest types that we don't currently
handle. However, we still need to know how long each digest is in order
to read over the appropriate amount of the buffer. This information is
provided to us as part of the spec header - make use of it rather than
hardcoding the set of digests and lengths we know about.
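A sketch of the idea, assuming the algorithm-size table has already been read out of the spec ID event header into a map (names here are illustrative, not the package's own):

```go
package attestexample

import (
	"bytes"
	"fmt"
	"io"
)

// skipDigest advances past a digest whose algorithm we don't handle, using
// the size the spec ID event header declared for that algorithm rather than
// a hardcoded table of known digest lengths.
func skipDigest(r *bytes.Reader, alg uint16, algSizes map[uint16]uint16) error {
	size, ok := algSizes[alg]
	if !ok {
		return fmt.Errorf("spec header declares no digest size for algorithm 0x%04x", alg)
	}
	if _, err := r.Seek(int64(size), io.SeekCurrent); err != nil {
		return fmt.Errorf("skipping %d-byte digest: %v", size, err)
	}
	return nil
}
```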
@brandonweeks detected another case of the "make([]T, untrustedValue)"
pattern, which would allow an attacker to cause the parser to allocate
an unbounded amount of memory.
Fix this by reading one algorithm at a time instead of pre-allocating a
slice of algorithms.
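The safe pattern looks roughly like this: even though the count comes from untrusted input, memory only grows as entries are actually decoded (algInfo is a hypothetical stand-in for the parsed structure):

```go
package attestexample

import (
	"encoding/binary"
	"fmt"
	"io"
)

// algInfo is a hypothetical stand-in for one algorithm entry in the header.
type algInfo struct {
	ID         uint16
	DigestSize uint16
}

// readAlgs decodes numAlgs entries one at a time. Because numAlgs is
// attacker-controlled, it must not be used as the size of a pre-allocated
// slice; append only after each entry has actually been read.
func readAlgs(r io.Reader, numAlgs uint32) ([]algInfo, error) {
	var algs []algInfo
	for i := uint32(0); i < numAlgs; i++ {
		var a algInfo
		if err := binary.Read(r, binary.LittleEndian, &a); err != nil {
			return nil, fmt.Errorf("reading algorithm %d of %d: %v", i, numAlgs, err)
		}
		algs = append(algs, a)
	}
	return algs, nil
}
```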
This PR adds event log parsing logic. Its main goal is to require
validation at the same time as parsing, so structured events are always
verified against a quote. This new API replaces the existing "verifier"
package.
It's not a goal of this PR to parse the event data. That will be a
follow-up, but since different users might want to parse different
events based on the OS, this API lets users of this package implement
custom event data parsing if they absolutely need to.
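In rough terms, usage looks like the sketch below (assuming a ParseEventLog/Verify-shaped API where events are only returned once they replay against quote-verified PCR values):

```go
package attestexample

import (
	"fmt"

	"github.com/google/go-attestation/attest"
)

// verifiedEvents parses an event log and returns its events only if they
// replay against the already-verified PCR values. Interpreting each
// event's raw data is left to the caller.
func verifiedEvents(logBlob []byte, pcrs []attest.PCR) ([]attest.Event, error) {
	el, err := attest.ParseEventLog(logBlob)
	if err != nil {
		return nil, fmt.Errorf("parsing event log: %v", err)
	}
	events, err := el.Verify(pcrs)
	if err != nil {
		return nil, fmt.Errorf("replaying event log: %v", err)
	}
	return events, nil
}
```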