|
|
|
|
|
|
|
|
|
nixos/doc: clean up defaults and examples
|
|
|
|
|
|
The progress ID is fairly useless; the status message is more useful for
humans.
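As a rough sketch of what reporting the status message could look like (the task-id variable and the exact CLI invocation are assumptions, not necessarily what the script does):
  # Print the human-readable StatusMessage rather than the progress/task id
  # (field path per the DescribeImportSnapshotTasks API).
  aws ec2 describe-import-snapshot-tasks \
    --import-task-ids "$task_id" \
    --query 'ImportSnapshotTasks[0].SnapshotTaskDetail.StatusMessage' \
    --output text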
|
|
|
|
|
|
|
|
Having a disks object with a dictionary of all the disks and their
properties makes it easier to process multi-disk images.
Note that `label` is renamed to `system_label` because `$label`
is something of a special token to jq.
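A rough illustration of why the dictionary shape helps (the file name and field names here are assumptions, not the exact schema):
  # Iterate over every disk in the image metadata; `label` is avoided as a jq
  # variable name because jq reserves it for its label/break construct.
  jq -r '.disks
         | to_entries[]
         | "\(.key): label=\(.value.system_label)"' image-info.json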
|
|
Introduce an AWS EC2 AMI which supports aarch64 and x86_64 with a ZFS
root.
This uses `make-zfs-image`, which implies two EBS volumes are needed
inside EC2: one for boot, one for root. It should not matter which
is identified as `xvda` and which as `xvdb`, though I have always
uploaded `boot` as `xvda`.
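A hedged sketch of how the two resulting snapshots might be registered (image name, architecture, and snapshot IDs are placeholders; the real upload goes through the create-amis tooling):
  aws ec2 register-image \
    --name "nixos-zfs-example" \
    --architecture arm64 \
    --virtualization-type hvm \
    --ena-support \
    --root-device-name /dev/xvda \
    --block-device-mappings \
      "DeviceName=/dev/xvda,Ebs={SnapshotId=$boot_snapshot_id}" \
      "DeviceName=/dev/xvdb,Ebs={SnapshotId=$root_snapshot_id}"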
|
|
For reasons we haven't been able to work out, the aarch64 EC2 image now
regularly exceeds the output image size on hydra.nixos.org. As a
workaround, set this back to being statically sized again.
The other images do seem to build; it's just a case of the EC2 image
now being too large (occasionally, non-deterministically).
|
|
(cherry picked from commit f3aa040bcbf39935e7e9ac7a7296eac9da7623ec)
|
|
This reverts commit f3aa040bcbf39935e7e9ac7a7296eac9da7623ec.
|
|
This reverts commit 6a8359a92ab501ae62739e9d3302f48e3e73c750.
|
|
As a temporary workaround for #120473 while the image builder is patched
to correctly look up disk sizes, partially revert
f3aa040bcbf39935e7e9ac7a7296eac9da7623ec for EC2 disk images only.
We retain the type allowing "auto" but set the default back to the
previous value.
|
|
|
|
update the create-gce.sh script with the ability to create public images
out of a GS object.
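For illustration only (bucket, image name, and flags are assumptions; the script may do this differently), creating a public image from an object already in GCS could look like:
  gcloud compute images create "nixos-image-example" \
    --source-uri "gs://$bucket/nixos-image-example.raw.tar.gz"
  # Grant all authenticated users permission to use the image,
  # which is what "public" means for GCE images.
  gcloud compute images add-iam-policy-binding "nixos-image-example" \
    --member "allAuthenticatedUsers" \
    --role "roles/compute.imageUser"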
|
|
GP3 is always faster and cheaper than GP2, so sticking to GP2 is
leaving money on the table.
https://cloudwiry.com/ebs-gp3-vs-gp2-pricing-comparison/
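Two ways this shows up in practice (volume ID and mapping values are placeholders): new AMIs can request gp3 in their block device mapping, and existing gp2 volumes can be converted in place.
  # Ask for gp3 when registering the image, e.g.:
  #   --block-device-mappings "DeviceName=/dev/xvda,Ebs={SnapshotId=$snap,VolumeType=gp3}"
  # ...or convert an already-attached gp2 volume without downtime:
  aws ec2 modify-volume --volume-id "$volume_id" --volume-type gp3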
|
|
AMI root partition table: use GPT to support >2T partitions
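A minimal sketch of the layout this implies (device path and sizes assumed): a GPT disk needs a small BIOS boot partition so GRUB can still be installed on legacy-BIOS instances.
  parted --script /dev/xvda -- \
    mklabel gpt \
    mkpart no-fs 1MiB 2MiB \
    set 1 bios_grub on \
    mkpart primary ext4 2MiB 100%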
|
|
|
|
Co-authored-by: Cole Helbling <cole.e.helbling@outlook.com>
|
|
The complete setup on the AWS end can be configured
with the following Terraform configuration. It generates
a ./credentials.sh which I just copy/pasted into the
create-amis.sh script near the top. Note: the entire stack
of users and bucket can be destroyed at the end of the
import.
variable "region" {
type = string
}
variable "availability_zone" {
type = string
}
provider "aws" {
region = var.region
}
resource "aws_s3_bucket" "nixos-amis" {
bucket_prefix = "nixos-amis-"
lifecycle_rule {
enabled = true
abort_incomplete_multipart_upload_days = 1
expiration {
days = 7
}
}
}
resource "local_file" "credential-file" {
file_permission = "0700"
filename = "${path.module}/credentials.sh"
sensitive_content = <<SCRIPT
export service_role_name="${aws_iam_role.vmimport.name}"
export bucket="${aws_s3_bucket.nixos-amis.bucket}"
export AWS_ACCESS_KEY_ID="${aws_iam_access_key.uploader.id}"
export AWS_SECRET_ACCESS_KEY="${aws_iam_access_key.uploader.secret}"
SCRIPT
}
# The following resources are for the *uploader*
resource "aws_iam_user" "uploader" {
name = "nixos-amis-uploader"
}
resource "aws_iam_access_key" "uploader" {
user = aws_iam_user.uploader.name
}
resource "aws_iam_user_policy" "upload-to-nixos-amis" {
user = aws_iam_user.uploader.name
policy = data.aws_iam_policy_document.upload-policy-document.json
}
data "aws_iam_policy_document" "upload-policy-document" {
statement {
effect = "Allow"
actions = [
"s3:ListBucket",
"s3:GetBucketLocation",
]
resources = [
aws_s3_bucket.nixos-amis.arn
]
}
statement {
effect = "Allow"
actions = [
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject",
]
resources = [
"${aws_s3_bucket.nixos-amis.arn}/*"
]
}
statement {
effect = "Allow"
actions = [
"ec2:ImportSnapshot",
"ec2:DescribeImportSnapshotTasks",
"ec2:DescribeImportSnapshotTasks",
"ec2:RegisterImage",
"ec2:DescribeImages"
]
resources = [
"*"
]
}
}
# The following resources are for the *vmimport service user*
# See: https://docs.aws.amazon.com/vm-import/latest/userguide/vmie_prereqs.html#vmimport-role
resource "aws_iam_role" "vmimport" {
assume_role_policy = data.aws_iam_policy_document.vmimport-trust.json
}
resource "aws_iam_role_policy" "vmimport-access" {
role = aws_iam_role.vmimport.id
policy = data.aws_iam_policy_document.vmimport-access.json
}
data "aws_iam_policy_document" "vmimport-access" {
statement {
effect = "Allow"
actions = [
"s3:GetBucketLocation",
"s3:GetObject",
"s3:ListBucket",
]
resources = [
aws_s3_bucket.nixos-amis.arn,
"${aws_s3_bucket.nixos-amis.arn}/*"
]
}
statement {
effect = "Allow"
actions = [
"ec2:ModifySnapshotAttribute",
"ec2:CopySnapshot",
"ec2:RegisterImage",
"ec2:Describe*"
]
resources = [
"*"
]
}
}
data "aws_iam_policy_document" "vmimport-trust" {
statement {
effect = "Allow"
principals {
type = "Service"
identifiers = [ "vmie.amazonaws.com" ]
}
actions = [
"sts:AssumeRole"
]
condition {
test = "StringEquals"
variable = "sts:ExternalId"
values = [ "vmimport" ]
}
}
}
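A possible way to use it (the region/zone values and the sourcing step are assumptions; the commit describes pasting credentials.sh into create-amis.sh instead):
  terraform init
  terraform apply -var 'region=us-east-1' -var 'availability_zone=us-east-1a'
  # credentials.sh now holds the bucket name, vmimport role name, and uploader
  # keys; either paste its contents near the top of create-amis.sh or source it:
  . ./credentials.sh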
|
|
tasks fails
|
|
|
|
block_device_mappings single strings
|
|
|
|
to avoid masking return values.
|
|
|
|
|
|
nixos/maintainers/scripts/ec2/create-amis.sh: fix argument check
|
|
This is required when migrating to flakes
|
|
|
|
Because this script enables `set -u`, bash exits with the error
$1: unbound variable
instead of the helpful usage message when no arguments are provided.
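One way to keep the usage message working under `set -u` (a sketch with a hypothetical argument name, not necessarily the exact fix in this commit):
  set -euo pipefail
  # Check the argument count before dereferencing $1.
  if [ $# -eq 0 ]; then
    echo "Usage: $0 <image-file>" >&2
    exit 1
  fi
  image_file=$1
  # Alternatively, a default expansion makes the reference itself safe:
  #   image_file="${1:-}"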
|
|
|
|
|
|
|
|
|
|
|
|
|
|
NixOS 20.03 is built on kernel 5.4 and 19.09 is on 4.19, so we should update
this option to the highest value possible, per linked upstream instructions from
Amazon.
|
|
|
|
I'm not really sure how the line directly after ended up with this,
but this line didn't...
|
|
For blkfront drives, there appears to be no difference
between /dev/sda1 and /dev/xvda: the drive always appears as the
kernel device /dev/xvda.
For NVMe drives, the root device typically appears as
/dev/nvme0n1. Amazon provides the 'ec2-utils' package for their
first-party Linux ("Amazon Linux"), which configures udev to create
symlinks from the provided device name to the NVMe device name. That
name is communicated through the NVMe "Identify Controller" response,
which can be inspected with:
nvme id-ctrl --raw-binary /dev/nvme0n1 | cut -c3073-3104 | hexdump -C
On Amazon Linux, where the device is attached as "/dev/xvda", this
creates:
- /dev/xvda -> nvme0n1
- /dev/xvda1 -> nvme0n1p1
On NixOS, where the device is attached as "/dev/sda1", this creates:
- /dev/sda1 -> nvme0n1
- /dev/sda11 -> nvme0n1p1
This is odd, but not inherently a problem.
NixOS unconditionally configures GRUB to install to `/dev/xvda`, which
fails on an instance using NVMe storage. With the root device name set
to xvda, both blkfront and NVMe drives are accessible as /dev/xvda,
either directly or by symlink.
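A quick way to check this from a shell on a running instance (device paths are the ones discussed above; output varies with instance type):
  # On an NVMe-backed instance the configured name is only a udev symlink:
  readlink -f /dev/sda1    # -> /dev/nvme0n1
  # On a blkfront (Xen) instance the kernel device really is xvda:
  readlink -f /dev/xvda    # -> /dev/xvda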
|
|
Replace /home/deploy with $HOME to allow running the script from outside
the bastion.
|
|
|
|
|
|
|