# CLI Reference

The `petra` CLI provisions and manages Kubernetes clusters using the AWS SDK directly.
## Installation

```sh
cd cli
go build -o petra .
```
## Commands

### Standalone Cluster Management

```sh
petra up -f cluster.yaml         # Provision a cluster
petra destroy -f cluster.yaml    # Tear down all resources
petra status -f cluster.yaml     # Check instance status
petra kubeconfig -f cluster.yaml # Retrieve kubeconfig via SSM
```
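These commands compose into a full lifecycle. A sketch, guarded so it is a no-op where `petra` is not on the PATH (assumes AWS credentials and a `cluster.yaml` spec in the working directory):

```shell
# Standalone lifecycle sketch: provision, verify, fetch credentials, tear down.
if command -v petra >/dev/null 2>&1; then
  petra up -f cluster.yaml                       # provision
  petra status -f cluster.yaml                   # check instance status
  petra kubeconfig -f cluster.yaml > kubeconfig  # retrieve kubeconfig via SSM
  petra destroy -f cluster.yaml                  # tear down all resources
  result=ran
else
  result="skipped: petra not installed"
fi
echo "lifecycle: $result"
```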
### CAPI Fleet Management

```sh
petra capi init -f mgmt.yaml            # Boot management cluster + CAPI
petra capi destroy -f mgmt.yaml         # Tear down management cluster
petra capi status -f mgmt.yaml          # Management cluster health
petra capi kubeconfig -f mgmt.yaml      # Management cluster kubeconfig
petra capi create -f workload.yaml      # Create workload cluster
petra capi list                         # List workload clusters
petra capi delete -f workload.yaml      # Delete workload cluster
petra capi cluster-kubeconfig -f w.yaml # Workload cluster kubeconfig
```
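A fleet workflow layers these: boot the management cluster once, then create and manage workload clusters from it. A sketch under the same hedges (the spec files are placeholders, and the block is a no-op where `petra` is not installed):

```shell
# CAPI fleet sketch: one management cluster, one workload cluster.
if command -v petra >/dev/null 2>&1; then
  petra capi init -f mgmt.yaml        # boot management cluster + CAPI
  petra capi create -f workload.yaml  # create a workload cluster
  petra capi list                     # confirm it is registered
  petra capi cluster-kubeconfig -f workload.yaml > workload.kubeconfig
  petra capi delete -f workload.yaml  # delete the workload cluster
  petra capi destroy -f mgmt.yaml     # finally, tear down management
  result=ran
else
  result="skipped: petra not installed"
fi
echo "fleet: $result"
```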
### Bundle Management

```sh
petra bundle create -f bundle.yaml # Package images for air-gap
```

### Utility

```sh
petra version # Print version info
petra help    # Print usage
```
## Environment Variables

| Variable | Default | Description |
|---|---|---|
| `AWS_PROFILE` | `shebashio` | AWS credential profile |
| `PETRA_AMI` | (none) | Override Flatcar AMI lookup with a custom AMI (e.g., pre-pulled CGR images) |
| `PETRA_K3S_BINARY` | (none) | Path to a custom k3s binary (e.g., FIPS build from source) |

The region is specified in the cluster spec file, not via an environment variable.
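The overrides combine naturally for an air-gapped or FIPS run. A hedged example (the AMI ID and binary path are illustrative placeholders, not real artifacts, and the block is a no-op where `petra` is not installed):

```shell
# Air-gapped / FIPS invocation sketch.
export AWS_PROFILE=shebashio
export PETRA_AMI=ami-0123456789abcdef0     # placeholder: custom Flatcar AMI with pre-pulled images
export PETRA_K3S_BINARY="$HOME/build/k3s"  # placeholder: FIPS k3s built from source
if command -v petra >/dev/null 2>&1; then
  petra up -f cluster.yaml
  result=ran
else
  result="skipped: petra not installed"
fi
echo "petra up: $result"
```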
## Cluster Spec

All cluster operations require a spec file passed with `-f`:

```yaml
apiVersion: petra.sh/v1alpha1
kind: Cluster
metadata:
  name: petra-dev
spec:
  kubernetes:
    version: v1.35.3+k3s1
    profile: standard # standard | fips-stig
  target:
    type: aws
    region: us-west-1
  nodes:
    controlPlane:
      count: 1
      instanceType: m5a.large
    workers:
      count: 2
      instanceType: m5a.large
  addons:
    cilium:
      enabled: true
      hubble: true
    flux:
      enabled: true
    gatekeeper:
      enabled: false
    tetragon:
      enabled: false
    certManager:
      enabled: true
```
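For a hardened cluster, presumably only the `profile` field changes (paired, per the environment-variable table above, with `PETRA_K3S_BINARY` pointing at a FIPS k3s binary). A hedged fragment; whether `fips-stig` implies further spec changes is an assumption:

```yaml
spec:
  kubernetes:
    version: v1.35.3+k3s1
    profile: fips-stig # assumption: selects the FIPS/STIG hardening path
```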
## Resource Management

All AWS resources are tagged:

```
Project=petra
Cluster=<name>
ManagedBy=petra-cli
```

The `destroy` command discovers resources by tag; no state file is required.
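The same tags can be queried directly with the AWS CLI, for example to list one cluster's instances. A sketch that assumes AWS credentials, the `us-west-1` region from the example spec, and the example cluster name `petra-dev` (it degrades to a no-op where the `aws` CLI or credentials are missing):

```shell
# List EC2 instances belonging to one petra cluster, by tag.
if command -v aws >/dev/null 2>&1; then
  aws ec2 describe-instances \
    --region us-west-1 \
    --filters "Name=tag:Project,Values=petra" \
              "Name=tag:Cluster,Values=petra-dev" \
    --query 'Reservations[].Instances[].[InstanceId,State.Name]' \
    --output text || true   # tolerate missing credentials
  result=queried
else
  result="skipped: aws CLI not installed"
fi
```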
## Access Model

All access is via SSM Session Manager. No SSH keys, no port 22.

```sh
aws ssm start-session --target <instance-id> --region us-west-1
```