Heads: the other side of TAILS
Heads is a configuration for laptops and servers that tries to bring more security to commodity hardware. Among its goals are:
- Use free software on the boot path
- Move the root of trust into hardware (or at least the ROM bootblock)
- Measure and attest to the state of the firmware
- Measure and verify all filesystems
NOTE: It is a work in progress and not yet ready for non-technical users. If you're interested in contributing, please get in touch. Installation requires disassembly of your laptop or server, external SPI flash programmers, a possible risk of destroying your device, and significant frustration.
More information is available in the 33C3 presentation on building "Slightly more secure systems".
Documentation
Please refer to the Heads-wiki for your Heads documentation needs.
Contributing
We welcome contributions to the Heads project! Before contributing, please read our Contributing Guidelines for information on how to get started, submit issues, and propose changes.
Building Heads
Under QubesOS?
- Set up a Nix persistent layer under QubesOS (thanks @rapenne-s!)
- Install Docker under QubesOS (an old and imperfect article of mine; better guides may exist)
Build the docker image locally from the nix develop layer
Set up Nix and flakes
- If you don't already have Nix, install it:
[ -d /nix ] || sh <(curl -L https://nixos.org/nix/install) --no-daemon
. /home/user/.nix-profile/etc/profile.d/nix.sh
- Enable flake support in nix
mkdir -p ~/.config/nix
echo 'experimental-features = nix-command flakes' >>~/.config/nix/nix.conf
Build image
- Build the local nix develop environment, with flakes locked to the pinned versions
nix --print-build-logs --verbose develop --ignore-environment --command true
- Build the docker image from the nix develop environment just created (this will take a while and create a local "linuxboot/heads:dev-env" docker image):
nix --print-build-logs --verbose build .#dockerImage && docker load < result
On some hardened OSes, you may encounter problems with ptrace.
> proot error: ptrace(TRACEME): Operation not permitted
The most likely reason is that your kernel.yama.ptrace_scope value is too high and doesn't allow docker+nix to run properly. You'll need to set kernel.yama.ptrace_scope to 1 while you build Heads.
sudo sysctl kernel.yama.ptrace_scope #shows the current value, probably 2 or 3
sudo sysctl -w kernel.yama.ptrace_scope=1 #sets the value so nix+docker run properly
(don't forget to restore the previous value after you finish building Heads)
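For example, assuming the first command above reported a value of 2:
sudo sysctl -w kernel.yama.ptrace_scope=2 #restore the original value once the build is done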
Done!
Your local docker image "linuxboot/heads:dev-env" is ready to use. It is reproducible for the specific Heads commit used, and it will produce ROMs reproducible for that Heads commit ID.
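You can confirm the image is present with:
docker images linuxboot/heads:dev-env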
Jump into the docker image created by nix develop for an interactive workflow
docker run -e DISPLAY=$DISPLAY --network host --rm -ti -v $(pwd):$(pwd) -w $(pwd) linuxboot/heads:dev-env
From there you can use the docker image interactively.
make BOARD=board_name
where board_name is the name of a board directory under the ./boards directory.
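To see the available board names, list the contents of that directory:
ls ./boards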
One useful example is to build qemu board ROMs and test them through the qemu/kvm/swtpm provided in the docker image. Please refer to the qemu documentation for more information.
Eg:
make BOARD=qemu-coreboot-fbwhiptail-tpm2 # Build rom, export public key to emulated usb storage from qemu runtime
make BOARD=qemu-coreboot-fbwhiptail-tpm2 PUBKEY_ASC=~/pubkey.asc inject_gpg # Inject pubkey into rom image
make BOARD=qemu-coreboot-fbwhiptail-tpm2 USB_TOKEN=Nitrokey3NFC PUBKEY_ASC=~/pubkey.asc ROOT_DISK_IMG=~/qemu-disks/debian-9.cow2 INSTALL_IMG=~/Downloads/debian-9.13.0-amd64-xfce-CD-1.iso run # Install
Alternatively, you can use the locally built docker image to build a board ROM image in a single call.
Eg:
docker run -e DISPLAY=$DISPLAY --network host --rm -ti -v $(pwd):$(pwd) -w $(pwd) linuxboot/heads:dev-env -- make BOARD=nitropad-nv41
Pull the docker hub image to prepare reproducible ROMs, as CircleCI does, in one call
docker run -e DISPLAY=$DISPLAY --network host --rm -ti -v $(pwd):$(pwd) -w $(pwd) tlaurion/heads-dev-env:latest -- make BOARD=x230-hotp-maximized
docker run -e DISPLAY=$DISPLAY --network host --rm -ti -v $(pwd):$(pwd) -w $(pwd) tlaurion/heads-dev-env:latest -- make BOARD=nitropad-nv41
Maintenance notes on docker image
Redo the steps above whenever flake.nix or flake.lock changes. Commit the changes. Then publish on docker hub:
#put relevant things in variables:
docker_version="vx.y.z" && docker_hub_repo="tlaurion/heads-dev-env"
#update pinned packages to the latest available ones if needed; modify flake.nix derivations if needed:
nix flake update
#point the CircleCI config at the new docker image version
sed "s@\(image: \)\(.*\):\(v[0-9]*\.[0-9]*\.[0-9]*\)@\1\2:$docker_version@" -i .circleci/config.yml
# commit changes
git commit --signoff -m "Bump nix develop based docker image to $docker_hub_repo:$docker_version"
#use committed flake.nix and flake.lock in nix develop
nix --print-build-logs --verbose develop --ignore-environment --command true
#build new docker image from the nix develop environment
nix --print-build-logs --verbose build .#dockerImage && docker load < result
#tag produced docker image with new version
docker tag linuxboot/heads:dev-env "$docker_hub_repo:$docker_version"
#push newly created docker image to docker hub
docker push "$docker_hub_repo:$docker_version"
#test with CircleCI in PR. Merge.
git push ...
#make last tested docker image version the latest
docker tag "$docker_hub_repo:$docker_version" "$docker_hub_repo:latest"
docker push "$docker_hub_repo:latest"
These steps can be combined into reproducible one-liners to ease maintenance.
Test image in dirty mode:
docker_version="vx.y.z" && docker_hub_repo="tlaurion/heads-dev-env" && sed "s@\(image: \)\(.*\):\(v[0-9]*\.[0-9]*\.[0-9]*\)@\1\2:$docker_version@" -i .circleci/config.yml && nix --print-build-logs --verbose develop --ignore-environment --command true && nix --print-build-logs --verbose build .#dockerImage && docker load < result && docker tag linuxboot/heads:dev-env "$docker_hub_repo:$docker_version" && docker push "$docker_hub_repo:$docker_version"
Notes:
- Local builds can use the ":latest" tag, which points to the latest successfully tested CircleCI run
- To reproduce CircleCI results, make sure to use the same versioned tag declared under "image:" in .circleci/config.yml
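For example, to find the exact tag currently in use:
grep 'image:' .circleci/config.yml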
General notes on reproducible builds
In order to build reproducible firmware images, Heads builds a specific version of gcc and uses it to compile the Linux kernel and various tools that go into the initrd. Unfortunately this means the first step is a little slow, since it will clone the musl-cross-make tree and build gcc.
Once that is done, the top level Makefile will handle most of the remaining details: it downloads the various packages, verifies their hashes, applies Heads-specific patches, configures and builds them with the cross compiler, and then copies the necessary parts into the initrd directory.
There are still dependencies on the build system's coreutils in /bin and /usr/bin, but any problems should be detectable if you end up with a different hash than the official builds.
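As a hypothetical example (the board name and output path below are illustrative; the actual location depends on the board and Heads version), comparing your ROM's hash against a published one looks like:
sha256sum build/x86/qemu-coreboot-fbwhiptail-tpm2/*.rom #compare against the hash published by CI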
The various components that are downloaded are listed in the ./modules directory.
We also recommend installing Qubes OS, although Heads can kexec into any Linux or multiboot kernel.
Notes:
- Building coreboot's cross compilers can take a while. Luckily this is only done once.
- Builds are finally reproducible! The reproduciblebuilds tag tracks any regressions.
- Currently only tested on QEMU, the Thinkpad x230, the Librem series, and the Chell Chromebook. Xen does not work in QEMU; signing, HOTP, and TOTP do work (see below).
- Building for the Lenovo X220 requires binary blobs to be placed in the blobs/x220/ folder. See the readme.md file in that folder
- Building for the Librem 13 v2/v3 or Librem 15 v3/v4 requires binary blobs to be placed in the blobs/librem_skl folder. See the readme.md file in that folder
QEMU:
OS booting can be tested in QEMU using a software TPM. HOTP can be tested by forwarding a USB token from the host to the guest.
For more information and setup instructions, refer to the qemu documentation.
coreboot console messages
The coreboot console messages are stored in the CBMEM region and can be read by the Linux payload with the cbmem --console | less command. There is a lot of interesting data about the state of the system.
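For example, from a shell in the booted payload (grep is handy for narrowing the output):
cbmem --console | less #page through the full coreboot console log
cbmem --console | grep -i 'error' #or filter for lines of interest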