Terraform for Advanced Users: Terraform Syntax and HCL Masterclass

You have mastered the Terraform basics and successfully completed your first project. Now it is time for the next step: mastering the HashiCorp Configuration Language (HCL) in all its facets. This article takes you from advanced HCL features to professional syntax techniques that make your Terraform configurations maintainable, reusable, and robust.

Building on our article on Terraform fundamentals, we dig deeper into the language features that separate simple Terraform scripts from professional infrastructure definitions. You will learn about advanced data structures, complex validations, handling of sensitive data, and Terraform's powerful built-in functions. Each section combines theoretical understanding with practical examples you can apply right away.

Additional prerequisites for this article:
– A basic understanding of core Terraform concepts (providers, resources, state)
– Experience with the Terraform workflow (init, plan, apply)
– Working knowledge of basic HCL syntax
– First hands-on experience with variables and outputs

Symbols and markers used in this article

💡 Tips and hints for working more efficiently
⚠️ Warnings and pitfalls that will save you trouble
🔧 Practical examples you can follow along with
❗ Typical sources of error and how to fix them

Goals of this article: After working through this article you will have mastered the advanced HCL features that are essential for professional Terraform configurations. You will understand complex data structures, be able to implement robust validations, and know how to manage sensitive data securely. Built-in functions will become your tools for elegant, reusable infrastructure definitions.

This article turns you into an HCL expert who writes maintainable, secure, and efficient Terraform configurations. You will be ready for more advanced topics such as state management, AWS infrastructures, and team workflows, which the following articles in this series cover.

From advanced syntax features to built-in functions, this article turns you into a Terraform developer who writes code that is not just functional but elegant and professional.

Terraform Syntax and HCL

In the fundamentals article you already got to know basic HCL syntax and applied it in practice. Now we dive deeper into the HashiCorp Configuration Language. HCL is more than syntax: it is the language in which you express your entire infrastructure. Its advanced features let you implement complex logic, create dynamic configurations, and write maintainable infrastructure definitions.

Advanced HCL Syntax

What makes HCL a powerful configuration language? HCL combines the simplicity of a configuration language with the flexibility of a programming language. Unlike plain JSON or YAML, HCL offers functions, variables, and conditional logic. This lets you reuse configurations you have written once across different scenarios.
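A small sketch makes that difference concrete; the variable names (var.environment, var.base_name) are illustrative assumptions, not part of this article's project:

# A conditional expression and built-in functions in one place,
# neither of which plain JSON or YAML can express
locals {
  instance_type = var.environment == "prod" ? "t3.large" : "t3.micro"
  bucket_name   = lower(replace(var.base_name, " ", "-"))
}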

Why is understanding the advanced syntax crucial? Without the advanced HCL features you end up writing repetitive code that is hard to maintain. With them you build dynamic, reusable configurations that adapt to different environments. That not only reduces the error rate but also makes your infrastructure more scalable.

What should you watch out for with advanced HCL features? The power of the advanced syntax can also lead to complex, hard-to-understand configuration. Striking a balance between functionality and readability is essential: every piece of complexity must have a clear benefit.

What do you use advanced HCL syntax for? Mainly for multi-environment configurations, dynamic resource creation, complex validations, and automating recurring infrastructure patterns.

Complex Data Structures (Objects, Sets, Maps)

What are complex data structures in HCL? Beyond the simple types (string, number, bool), HCL offers complex structures such as objects, sets, and maps. They let you organize hierarchical data and structure complex configurations clearly.

Why do complex data structures matter? They solve the problem of variable explosion. Instead of defining dozens of individual variables, you can organize related data in logical structures. That makes your configuration more maintainable and reduces sources of error.
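As a minimal sketch of the idea, compare the two styles; the variable names are hypothetical:

# Instead of many loose variables ...
variable "db_engine"  { type = string }
variable "db_version" { type = string }
variable "db_storage" { type = number }

# ... one typed object keeps related values together
variable "db" {
  type = object({
    engine  = string
    version = string
    storage = number
  })
}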

What do you have to watch out for with complex data structures? Type consistency is crucial. HCL is type-checked, which means you cannot freely mix different data types. Nested structures can also make debugging harder.
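Terraform does perform safe automatic conversions (for example, the string "443" becomes the number 443), but rejects anything that cannot be converted. A hypothetical sketch:

variable "ports" {
  type = list(number)
}

# ports = [80, "443"]    # OK: "443" converts cleanly to a number
# ports = [80, "https"]  # Error: "https" cannot be converted to number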

What do you use complex data structures for? For configurations that logically belong together: database configurations, network settings, application configs, and multi-environment definitions.

The object type in detail:

Objects are structured types with named attributes, and each attribute can have a different type. They are ideal for configurations that consist of several related properties.

# Extended object definition for a database configuration
variable "database_config" {
  description = "Complete database configuration for different environments"
  type = object({
    # Base configuration
    engine         = string
    engine_version = string
    instance_class = string
    
    # Storage configuration
    storage_type      = string
    allocated_storage = number
    max_storage      = number
    storage_encrypted = bool
    
    # Backup configuration
    backup_enabled         = bool
    backup_retention_days  = number
    backup_window         = string
    maintenance_window    = string
    
    # Monitoring configuration
    monitoring_enabled    = bool
    monitoring_interval   = number
    performance_insights  = bool
    
    # Network configuration
    subnet_group_name     = string
    security_group_ids    = list(string)
    publicly_accessible   = bool
    
    # Tags and metadata
    tags = map(string)
    
    # Advanced options
    parameters = map(string)
  })
  
  validation {
    condition = contains(["mysql", "postgres", "mariadb"], var.database_config.engine)
    error_message = "Database engine must be mysql, postgres, or mariadb."
  }
  
  validation {
    condition = var.database_config.allocated_storage >= 20 && var.database_config.allocated_storage <= 1000
    error_message = "Allocated storage must be between 20 and 1000 GB."
  }
  
  validation {
    condition = var.database_config.backup_retention_days >= 0 && var.database_config.backup_retention_days <= 35
    error_message = "Backup retention must be between 0 and 35 days."
  }
}

# Environment-specific database configurations
locals {
  database_configs = {
    dev = {
      engine         = "mysql"
      engine_version = "8.0"
      instance_class = "db.t3.micro"
      
      storage_type      = "gp2"
      allocated_storage = 20
      max_storage      = 50
      storage_encrypted = false
      
      backup_enabled        = false
      backup_retention_days = 0
      backup_window        = "03:00-04:00"
      maintenance_window   = "sun:04:00-sun:05:00"
      
      monitoring_enabled   = false
      monitoring_interval  = 0
      performance_insights = false
      
      subnet_group_name   = "dev-db-subnet-group"
      security_group_ids  = ["sg-dev-db"]
      publicly_accessible = false
      
      tags = {
        Environment = "development"
        BackupPolicy = "none"
        CostCenter  = "engineering"
      }
      
      parameters = {
        "innodb_buffer_pool_size" = "128M"
        "max_connections"         = "100"
      }
    }
    
    staging = {
      engine         = "mysql"
      engine_version = "8.0"
      instance_class = "db.t3.small"
      
      storage_type      = "gp3"
      allocated_storage = 50
      max_storage      = 200
      storage_encrypted = true
      
      backup_enabled        = true
      backup_retention_days = 7
      backup_window        = "03:00-04:00"
      maintenance_window   = "sun:04:00-sun:05:00"
      
      monitoring_enabled   = true
      monitoring_interval  = 60
      performance_insights = true
      
      subnet_group_name   = "staging-db-subnet-group"
      security_group_ids  = ["sg-staging-db"]
      publicly_accessible = false
      
      tags = {
        Environment = "staging"
        BackupPolicy = "weekly"
        CostCenter  = "engineering"
      }
      
      parameters = {
        "innodb_buffer_pool_size" = "256M"
        "max_connections"         = "200"
      }
    }
    
    prod = {
      engine         = "mysql"
      engine_version = "8.0"
      instance_class = "db.r5.large"
      
      storage_type      = "gp3"
      allocated_storage = 200
      max_storage      = 1000
      storage_encrypted = true
      
      backup_enabled        = true
      backup_retention_days = 30
      backup_window        = "03:00-04:00"
      maintenance_window   = "sun:04:00-sun:05:00"
      
      monitoring_enabled   = true
      monitoring_interval  = 15
      performance_insights = true
      
      subnet_group_name   = "prod-db-subnet-group"
      security_group_ids  = ["sg-prod-db-primary", "sg-prod-db-backup"]
      publicly_accessible = false
      
      tags = {
        Environment = "production"
        BackupPolicy = "daily"
        CostCenter  = "operations"
        Compliance  = "required"
      }
      
      parameters = {
        "innodb_buffer_pool_size" = "1024M"
        "max_connections"         = "500"
        "slow_query_log"         = "1"
        "long_query_time"        = "2"
      }
    }
  }
  
  # Configuration for the current environment
  current_db_config = local.database_configs[var.environment]
}

# RDS instance driven by the object configuration
resource "aws_db_instance" "main" {
  identifier = "${var.project_name}-${var.environment}-db"
  
  # Engine configuration
  engine         = local.current_db_config.engine
  engine_version = local.current_db_config.engine_version
  instance_class = local.current_db_config.instance_class
  
  # Storage configuration
  storage_type      = local.current_db_config.storage_type
  allocated_storage = local.current_db_config.allocated_storage
  max_allocated_storage = local.current_db_config.max_storage
  storage_encrypted = local.current_db_config.storage_encrypted
  
  # Backup configuration
  backup_retention_period = local.current_db_config.backup_retention_days
  backup_window          = local.current_db_config.backup_window
  maintenance_window     = local.current_db_config.maintenance_window
  
  # Monitoring configuration
  monitoring_interval                = local.current_db_config.monitoring_interval
  monitoring_role_arn               = local.current_db_config.monitoring_enabled ? aws_iam_role.rds_monitoring[0].arn : null
  performance_insights_enabled       = local.current_db_config.performance_insights
  performance_insights_retention_period = local.current_db_config.performance_insights ? 7 : null
  
  # Network configuration
  db_subnet_group_name   = local.current_db_config.subnet_group_name
  vpc_security_group_ids = local.current_db_config.security_group_ids
  publicly_accessible    = local.current_db_config.publicly_accessible
  
  # Database configuration
  db_name  = "${var.project_name}_${var.environment}"
  username = var.db_username
  password = var.db_password
  
  # Tags
  tags = merge(
    local.current_db_config.tags,
    {
      Name = "${var.project_name}-${var.environment}-db"
    }
  )
  
  # Lifecycle management
  lifecycle {
    prevent_destroy = true
  }
}

# Parameter group for database tuning
resource "aws_db_parameter_group" "main" {
  family = "${local.current_db_config.engine}${substr(local.current_db_config.engine_version, 0, 3)}"
  name   = "${var.project_name}-${var.environment}-params"
  
  dynamic "parameter" {
    for_each = local.current_db_config.parameters
    content {
      name  = parameter.key
      value = parameter.value
    }
  }
  
  tags = local.current_db_config.tags
}
💡 Object optimization: Use objects for configuration values that logically belong together. That makes your Terraform configuration not only clearer but also more type-safe.
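If your environments share most of their settings, optional object attributes (available since Terraform 1.3) can save a lot of repetition. This is a reduced sketch of the variable above, not a drop-in replacement:

variable "database_config" {
  type = object({
    engine            = string
    # optional() lets callers omit an attribute; the second
    # argument is the default that is used instead
    instance_class    = optional(string, "db.t3.micro")
    storage_encrypted = optional(bool, true)
  })
}

Attributes you omit when setting the variable then fall back to the declared defaults.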

The set type in depth:

Sets are collections of unique values without a defined order. They are ideal for configurations where order is irrelevant but uniqueness matters.
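The uniqueness guarantee is easy to see with toset(), which also drops any ordering:

locals {
  # Duplicates are removed: the resulting set has two elements
  trusted_cidrs = toset(["10.0.0.0/16", "10.0.0.0/16", "172.16.0.0/12"])
}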

# Extended set configuration for security groups
variable "security_rules" {
  description = "Security group rules configuration"
  type = set(object({
    type        = string
    from_port   = number
    to_port     = number
    protocol    = string
    cidr_blocks = list(string)
    description = string
  }))
  
  default = [
    {
      type        = "ingress"
      from_port   = 22
      to_port     = 22
      protocol    = "tcp"
      cidr_blocks = ["10.0.0.0/16"]
      description = "SSH access from VPC"
    },
    {
      type        = "ingress"
      from_port   = 80
      to_port     = 80
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
      description = "HTTP access from internet"
    },
    {
      type        = "ingress"
      from_port   = 443
      to_port     = 443
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
      description = "HTTPS access from internet"
    },
    {
      type        = "egress"
      from_port   = 0
      to_port     = 0
      protocol    = "-1"
      cidr_blocks = ["0.0.0.0/0"]
      description = "All outbound traffic"
    }
  ]
}

# Environment-specific security rules
locals {
  # Base rules for all environments
  base_security_rules = [
    {
      type        = "egress"
      from_port   = 0
      to_port     = 0
      protocol    = "-1"
      cidr_blocks = ["0.0.0.0/0"]
      description = "All outbound traffic"
    }
  ]
  
  # Development environment - less restrictive
  dev_security_rules = [
    {
      type        = "ingress"
      from_port   = 22
      to_port     = 22
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]  # Open for development
      description = "SSH access for development"
    },
    {
      type        = "ingress"
      from_port   = 80
      to_port     = 80
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
      description = "HTTP access"
    },
    {
      type        = "ingress"
      from_port   = 8080
      to_port     = 8080
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
      description = "Development server"
    }
  ]
  
  # Production environment - restrictive
  prod_security_rules = [
    {
      type        = "ingress"
      from_port   = 22
      to_port     = 22
      protocol    = "tcp"
      cidr_blocks = ["10.0.0.0/16"]  # VPC only
      description = "SSH access from VPC only"
    },
    {
      type        = "ingress"
      from_port   = 80
      to_port     = 80
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
      description = "HTTP access"
    },
    {
      type        = "ingress"
      from_port   = 443
      to_port     = 443
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
      description = "HTTPS access"
    }
  ]
  
  # Merge the environment-specific rules
  environment_rules = var.environment == "prod" ? local.prod_security_rules : local.dev_security_rules
  
  # Final set of rules
  final_security_rules = toset(concat(local.base_security_rules, local.environment_rules))
}

# Security group with dynamic rules
resource "aws_security_group" "app" {
  name_prefix = "${var.project_name}-${var.environment}-app-"
  description = "Security group for ${var.project_name} application"
  vpc_id      = var.vpc_id
  
  # Dynamic rule creation
  dynamic "ingress" {
    for_each = { for rule in local.final_security_rules : "${rule.type}-${rule.from_port}-${rule.to_port}" => rule if rule.type == "ingress" }
    content {
      from_port   = ingress.value.from_port
      to_port     = ingress.value.to_port
      protocol    = ingress.value.protocol
      cidr_blocks = ingress.value.cidr_blocks
      description = ingress.value.description
    }
  }
  
  dynamic "egress" {
    for_each = { for rule in local.final_security_rules : "${rule.type}-${rule.from_port}-${rule.to_port}" => rule if rule.type == "egress" }
    content {
      from_port   = egress.value.from_port
      to_port     = egress.value.to_port
      protocol    = egress.value.protocol
      cidr_blocks = egress.value.cidr_blocks
      description = egress.value.description
    }
  }
  
  tags = {
    Name = "${var.project_name}-${var.environment}-app-sg"
    Environment = var.environment
  }
  
  lifecycle {
    create_before_destroy = true
  }
}

The map type for complex configurations:

Maps are key-value pairs whose values can themselves be complex structures. They are ideal for configurations that need to be addressed by key.
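Direct key access fails with an error if the key is missing; lookup() lets you supply a fallback. A minimal sketch with hypothetical values:

locals {
  instance_sizes = {
    dev  = "t3.micro"
    prod = "t3.large"
  }

  # local.instance_sizes[var.environment] errors for unknown environments;
  # lookup() falls back to a default instead
  instance_type = lookup(local.instance_sizes, var.environment, "t3.small")
}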

# Extended map configuration for multi-region deployments
variable "region_configurations" {
  description = "Configuration for different AWS regions"
  type = map(object({
    # Region-specific settings
    availability_zones = list(string)
    cidr_block        = string
    
    # Compute configuration
    instance_types = object({
      web = string
      app = string
      db  = string
    })
    
    # Auto Scaling configuration
    scaling_config = object({
      min_size         = number
      max_size         = number
      desired_capacity = number
      scale_up_threshold   = number
      scale_down_threshold = number
    })
    
    # Backup configuration
    backup_config = object({
      enabled = bool
      retention_days = number
      cross_region_copy = bool
      destination_region = string
    })
    
    # Monitoring configuration
    monitoring_config = object({
      enabled = bool
      detailed_monitoring = bool
      log_retention_days = number
      alert_endpoints = list(string)
    })
    
    # Compliance settings
    compliance_config = object({
      encryption_required = bool
      audit_logging = bool
      vpc_flow_logs = bool
      cloudtrail_enabled = bool
    })
    
    # Cost optimization
    cost_config = object({
      spot_instances_enabled = bool
      reserved_instances = bool
      auto_shutdown = bool
      shutdown_schedule = string
    })
  }))
  
  default = {
    us-east-1 = {
      availability_zones = ["us-east-1a", "us-east-1b", "us-east-1c"]
      cidr_block        = "10.0.0.0/16"
      
      instance_types = {
        web = "t3.small"
        app = "t3.medium"
        db  = "db.t3.small"
      }
      
      scaling_config = {
        min_size             = 2
        max_size             = 10
        desired_capacity     = 3
        scale_up_threshold   = 70
        scale_down_threshold = 30
      }
      
      backup_config = {
        enabled = true
        retention_days = 30
        cross_region_copy = true
        destination_region = "us-west-2"
      }
      
      monitoring_config = {
        enabled = true
        detailed_monitoring = true
        log_retention_days = 90
        alert_endpoints = ["alerts-east@company.com"]
      }
      
      compliance_config = {
        encryption_required = true
        audit_logging = true
        vpc_flow_logs = true
        cloudtrail_enabled = true
      }
      
      cost_config = {
        spot_instances_enabled = false
        reserved_instances = true
        auto_shutdown = false
        shutdown_schedule = "never"
      }
    }
    
    us-west-2 = {
      availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"]
      cidr_block        = "10.1.0.0/16"
      
      instance_types = {
        web = "t3.micro"
        app = "t3.small"
        db  = "db.t3.micro"
      }
      
      scaling_config = {
        min_size             = 1
        max_size             = 5
        desired_capacity     = 2
        scale_up_threshold   = 80
        scale_down_threshold = 20
      }
      
      backup_config = {
        enabled = true
        retention_days = 7
        cross_region_copy = false
        destination_region = ""
      }
      
      monitoring_config = {
        enabled = true
        detailed_monitoring = false
        log_retention_days = 30
        alert_endpoints = ["alerts-west@company.com"]
      }
      
      compliance_config = {
        encryption_required = false
        audit_logging = false
        vpc_flow_logs = false
        cloudtrail_enabled = false
      }
      
      cost_config = {
        spot_instances_enabled = true
        reserved_instances = false
        auto_shutdown = true
        shutdown_schedule = "0 22 * * MON-FRI"
      }
    }
  }
}

# Apply the regional configuration
locals {
  current_region_config = var.region_configurations[var.aws_region]
}

# VPC with regional configuration
resource "aws_vpc" "main" {
  cidr_block           = local.current_region_config.cidr_block
  enable_dns_hostnames = true
  enable_dns_support   = true
  
  tags = {
    Name = "${var.project_name}-${var.environment}-vpc"
    Region = var.aws_region
  }
}

# Subnets based on the regional configuration
resource "aws_subnet" "public" {
  count = length(local.current_region_config.availability_zones)
  
  vpc_id                  = aws_vpc.main.id
  cidr_block              = cidrsubnet(local.current_region_config.cidr_block, 8, count.index)
  availability_zone       = local.current_region_config.availability_zones[count.index]
  map_public_ip_on_launch = true
  
  tags = {
    Name = "${var.project_name}-${var.environment}-public-${count.index + 1}"
    Type = "public"
  }
}

# Launch template with regional configuration
resource "aws_launch_template" "app" {
  name_prefix   = "${var.project_name}-${var.environment}-app-"
  image_id      = data.aws_ami.ubuntu.id
  instance_type = local.current_region_config.instance_types.app
  
  # Use spot instances when configured; market_type = null is not valid
  # inside a static block, so the whole block is created conditionally
  dynamic "instance_market_options" {
    for_each = local.current_region_config.cost_config.spot_instances_enabled ? [1] : []
    content {
      market_type = "spot"
      spot_options {
        spot_instance_type = "one-time"
      }
    }
  }
  
  # Monitoring configuration
  monitoring {
    enabled = local.current_region_config.monitoring_config.detailed_monitoring
  }
  
  # Extended EBS configuration
  block_device_mappings {
    device_name = "/dev/sda1"
    ebs {
      volume_size = 20
      volume_type = "gp3"
      encrypted   = local.current_region_config.compliance_config.encryption_required
      iops        = 3000
    }
  }
  
  user_data = base64encode(templatefile("${path.module}/userdata.sh", {
    region = var.aws_region
    environment = var.environment
    monitoring_enabled = local.current_region_config.monitoring_config.enabled
    log_retention = local.current_region_config.monitoring_config.log_retention_days
  }))
  
  tag_specifications {
    resource_type = "instance"
    tags = {
      Name = "${var.project_name}-${var.environment}-app"
      Region = var.aws_region
      CostOptimized = local.current_region_config.cost_config.spot_instances_enabled
    }
  }
}

# Auto Scaling group with regional configuration
resource "aws_autoscaling_group" "app" {
  name                = "${var.project_name}-${var.environment}-app-asg"
  vpc_zone_identifier = aws_subnet.public[*].id
  
  min_size         = local.current_region_config.scaling_config.min_size
  max_size         = local.current_region_config.scaling_config.max_size
  desired_capacity = local.current_region_config.scaling_config.desired_capacity
  
  launch_template {
    id      = aws_launch_template.app.id
    version = "$Latest"
  }
  
  # Auto-shutdown tag for cost optimization
  dynamic "tag" {
    for_each = local.current_region_config.cost_config.auto_shutdown ? [1] : []
    content {
      key                 = "AutoShutdown"
      value               = "true"
      propagate_at_launch = true
    }
  }
  
  tag {
    key                 = "Name"
    value               = "${var.project_name}-${var.environment}-app"
    propagate_at_launch = true
  }
}

Data structure comparison and typical uses:

Data type | Characteristics                 | Performance | Typical use           | Debugging difficulty
Object    | Structure with named attributes | High        | Configuration groups  | Low
Set       | Unique values, no ordering      | Medium      | Security groups, tags | Medium
Map       | Key-value pairs                 | High        | Environment configs   | Medium
List      | Ordered collection              | Low         | Sequential data       | Low

🔧 Practical example - choosing the right data structure:

# Choosing the right data structure
locals {
  # ✅ Object for configuration values that belong together
  database_config = {
    engine = "mysql"
    version = "8.0"
    storage = 100
  }
  
  # ✅ Set for unique values without ordering
  allowed_cidrs = toset([
    "10.0.0.0/16",
    "172.16.0.0/12",
    "192.168.0.0/16"
  ])
  
  # ✅ Map for key-value associations
  environment_settings = {
    dev  = { instance_type = "t3.micro", count = 1 }
    prod = { instance_type = "t3.large", count = 3 }
  }
  
  # ✅ List for ordered data
  deployment_stages = [
    "build",
    "test",
    "deploy",
    "verify"
  ]
}
💡 Performance tip: objects and maps offer O(1) access by key, while searching a list takes O(n).
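If you repeatedly search a list, it can pay off to index it once into a map with a for expression; the names here are illustrative:

locals {
  users = [
    { name = "alice", role = "admin" },
    { name = "bob",   role = "dev" }
  ]

  # Build the index once, then every access by name is O(1)
  users_by_name = { for u in local.users : u.name => u }
  alice_role    = local.users_by_name["alice"].role
}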
Escape Sequences and String Interpolation

What are escape sequences and string interpolation? Escape sequences let you use special characters inside strings. String interpolation makes your configurations dynamic by embedding variables, functions, and expressions directly in strings.

Why are they so important in Terraform? Terraform works with strings everywhere, from resource names to template files to user-data scripts. Without correct string handling you get syntax errors and security holes.

What do you have to watch out for in string operations? String interpolation can lead to injection vulnerabilities if values are not escaped properly. Complex string operations can also hurt Terraform's performance.
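A common source of such injection problems is assembling JSON by hand through interpolation. jsonencode() escapes values correctly, so quotes or backslashes in the data cannot break the document:

locals {
  comment = "He said \"hi\""

  # Fragile: a quote in the value breaks the JSON
  # policy_bad = "{\"comment\": \"${local.comment}\"}"

  # Robust: jsonencode escapes the value for you
  policy = jsonencode({ comment = local.comment })
}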

What do you use advanced string features for? For dynamic configuration, template generation, JSON/YAML generation, and complex naming schemes.

# Advanced string operations
locals {
  # Base variables
  project_name = "ecommerce-platform"
  environment  = "production"
  region       = "us-east-1"
  
  # Advanced escape sequences
  json_config = jsonencode({
    # Strings containing quotes
    message = "Welcome to \"${local.project_name}\" platform"
    
    # Linux paths for logs and configuration
    log_path = "/var/log/${local.project_name}/application.log"
    config_path = "/etc/${local.project_name}/config"
    data_path = "/opt/${local.project_name}/data"
    
    # Unix socket paths
    socket_path = "/var/run/${local.project_name}/${local.project_name}.sock"
    
    # Unicode characters
    company_name = "Acme Corp™"
    
    # Newlines and formatting
    welcome_text = <<-EOT
      Welcome to ${local.project_name}!
      Environment: ${local.environment}
      Region: ${local.region}
      
      For support, contact: support@company.com
    EOT
  })
  
  # Complex string interpolation
  resource_naming = {
    # Base naming
    prefix = "${local.project_name}-${local.environment}"
    
    # Conditional Naming
    db_name = "${local.project_name}-${local.environment == "prod" ? "production" : "development"}-db"
    
    # Timestamp-based names
    backup_name = "${local.project_name}-backup-${formatdate("YYYY-MM-DD-hhmm", timestamp())}"
    
    # Hash-based unique names (careful: timestamp() changes on every run, so this value drifts)
    unique_bucket = "${local.project_name}-${local.environment}-${substr(md5("${local.project_name}-${local.environment}-${timestamp()}"), 0, 8)}"
  }
  
  # Template variables for different formats
  template_vars = {
    # Basic information
    project_name = local.project_name
    environment  = local.environment
    region       = local.region
    
    # Conditional values
    debug_mode = local.environment != "prod"
    ssl_enabled = local.environment == "prod"
    
    # Computed values
    instance_count = local.environment == "prod" ? 3 : 1
    storage_size   = local.environment == "prod" ? 100 : 20
    
    # Formatted strings
    formatted_date = formatdate("YYYY-MM-DD", timestamp())
    formatted_time = formatdate("hh:mm:ss", timestamp())
  }
}

# Extended heredoc syntax for complex templates
resource "aws_instance" "app" {
  count         = local.template_vars.instance_count
  ami           = data.aws_ami.ubuntu.id
  instance_type = local.environment == "prod" ? "t3.large" : "t3.micro"
  
  # Complex user data using advanced string features
  user_data = base64encode(templatefile("${path.module}/templates/userdata.sh", merge(local.template_vars, {
    instance_index = count.index
    instance_id    = count.index + 1
    total_instances = local.template_vars.instance_count
  })))
  
  tags = {
    Name = "${local.resource_naming.prefix}-app-${count.index + 1}"
    Role = "application"
  }
}

# Extended JSON configuration for CloudWatch
resource "aws_cloudwatch_log_group" "app" {
  name              = "/aws/ec2/${local.resource_naming.prefix}/application"
  retention_in_days = local.environment == "prod" ? 90 : 7
  
  tags = {
    Name = "${local.resource_naming.prefix}-logs"
  }
}

# S3 bucket policy with complex string interpolation
resource "aws_s3_bucket_policy" "app_data" {
  bucket = aws_s3_bucket.app_data.id
  
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid    = "DenyInsecureConnections"
        Effect = "Deny"
        Principal = "*"
        Action = "s3:*"
        Resource = [
          "${aws_s3_bucket.app_data.arn}",
          "${aws_s3_bucket.app_data.arn}/*"
        ]
        Condition = {
          Bool = {
            "aws:SecureTransport" = "false"
          }
        }
      },
      {
        Sid    = "AllowApplicationAccess"
        Effect = "Allow"
        Principal = {
          AWS = aws_iam_role.app_role.arn
        }
        Action = [
          "s3:GetObject",
          "s3:PutObject",
          "s3:DeleteObject"
        ]
        Resource = "${aws_s3_bucket.app_data.arn}/*"
      }
    ]
  })
}

The userdata.sh template file:

#!/bin/bash
# Generated by Terraform on ${formatted_date} at ${formatted_time}
# Project: ${project_name}
# Environment: ${environment}
# Instance: ${instance_id}/${total_instances}

# Set environment variables
export PROJECT_NAME="${project_name}"
export ENVIRONMENT="${environment}"
export AWS_REGION="${region}"
export INSTANCE_ID="${instance_id}"
export DEBUG_MODE="${debug_mode}"
export SSL_ENABLED="${ssl_enabled}"

# Log everything
exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1

echo "Starting initialization for ${project_name} instance ${instance_id}/${total_instances}"

# Update system
apt-get update -y
apt-get upgrade -y

# Install required packages
apt-get install -y \
    curl \
    wget \
    unzip \
    jq \
    awscli \
    docker.io \
    docker-compose

# Configure Docker
systemctl enable docker
systemctl start docker
usermod -a -G docker ubuntu

# Create application directory
mkdir -p /opt/${project_name}
cd /opt/${project_name}

# Create application configuration
cat > config.json << 'EOF'
{
  "project_name": "${project_name}",
  "environment": "${environment}",
  "region": "${region}",
  "instance_id": "${instance_id}",
  "debug_mode": ${debug_mode},
  "ssl_enabled": ${ssl_enabled},
  "storage_size": ${storage_size},
  "database_config": {
    "host": "localhost",
    "port": 3306,
    "name": "${project_name}_${environment}"
  },
  "monitoring": {
    "enabled": true,
    "log_level": "${debug_mode ? "DEBUG" : "INFO"}",
    "metrics_interval": 60
  }
}
EOF

# Environment-specific configuration
%{ if environment == "prod" }
# Production-specific setup
echo "Setting up production environment..."

# SSL certificates
mkdir -p /etc/ssl/certs/${project_name}
# SSL setup would go here

# Performance tuning
echo "net.core.rmem_max = 134217728" >> /etc/sysctl.conf
echo "net.core.wmem_max = 134217728" >> /etc/sysctl.conf
sysctl -p

%{ else }
# Development/staging setup
echo "Setting up development environment..."

# Development tools
apt-get install -y \
    vim \
    htop \
    tree \
    git

# Relaxed security for development
echo "Development mode - relaxed security settings"
%{ endif }

# Install CloudWatch agent
wget https://s3.amazonaws.com/amazoncloudwatch-agent/ubuntu/amd64/latest/amazon-cloudwatch-agent.deb
dpkg -i amazon-cloudwatch-agent.deb

# Configure CloudWatch agent
cat > /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json << 'EOF'
{
  "agent": {
    "metrics_collection_interval": 60,
    "run_as_user": "cwagent"
  },
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/user-data.log",
            "log_group_name": "/aws/ec2/${project_name}-${environment}/user-data",
            "log_stream_name": "{instance_id}/user-data"
          },
          {
            "file_path": "/opt/${project_name}/logs/application.log",
            "log_group_name": "/aws/ec2/${project_name}-${environment}/application",
            "log_stream_name": "{instance_id}/application"
          }
        ]
      }
    }
  },
  "metrics": {
    "namespace": "${project_name}/${environment}",
    "metrics_collected": {
      "cpu": {
        "measurement": [
          "cpu_usage_idle",
          "cpu_usage_iowait",
          "cpu_usage_user",
          "cpu_usage_system"
        ],
        "metrics_collection_interval": 60
      },
      "disk": {
        "measurement": [
          "used_percent"
        ],
        "metrics_collection_interval": 60,
        "resources": [
          "*"
        ]
      },
      "mem": {
        "measurement": [
          "mem_used_percent"
        ],
        "metrics_collection_interval": 60
      }
    }
  }
}
EOF

# Start CloudWatch agent
systemctl enable amazon-cloudwatch-agent
systemctl start amazon-cloudwatch-agent

# Final status
echo "Initialization completed for ${project_name} instance ${instance_id}/${total_instances}"
echo "Environment: ${environment}"
echo "Debug mode: ${debug_mode}"
echo "SSL enabled: ${ssl_enabled}"
echo "Timestamp: $(date)"
Bash

Häufige String-Interpolation-Fehler:

Problem                         | Symptom                 | Lösung
Nicht-escaped Anführungszeichen | Error: Invalid JSON     | \" verwenden
Falsche Template-Syntax         | Error: Invalid template | ${} für Variablen
Circular Reference              | Error: Cycle detected   | Abhängigkeiten prüfen
Type-Mismatch                   | Error: Invalid value    | Explizite Konvertierung
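Die wichtigsten Escaping-Regeln aus der Tabelle als minimale Skizze (die Variable var.environment ist hier nur eine Annahme zur Illustration):

```hcl
locals {
  # Literales ${...} ausgeben: Dollarzeichen verdoppeln
  shell_snippet = "echo $${HOME}" # ergibt: echo ${HOME}

  # Anführungszeichen in JSON-Strings mit \" escapen
  json_fragment = "{\"env\": \"${var.environment}\"}"

  # Literales %{...} ausgeben: Prozentzeichen verdoppeln
  directive_doc = "Direktiven beginnen mit %%{ if ... }"
}
```

Alternativ vermeidet jsonencode() das manuelle Escapen von Anführungszeichen komplett.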
Kommentare und Code-Formatierung

Warum sind Kommentare in Infrastructure Code entscheidend? Terraform-Konfigurationen werden oft von Teams verwaltet und über Jahre hinweg gepflegt. Gute Kommentare erklären nicht nur was gemacht wird, sondern auch warum bestimmte Entscheidungen getroffen wurden. Das ist besonders wichtig bei komplexen Netzwerk-Konfigurationen oder Security-Einstellungen.

Worauf solltest du bei Kommentaren achten? Kommentare können veralten und dann irreführend sein. Halte sie aktuell und fokussiere dich auf das „Warum“ statt auf das „Was“. Das „Was“ ist aus dem Code ersichtlich.

Wofür verwendest du verschiedene Kommentar-Stile? Einzeilige Kommentare für kurze Erklärungen, mehrzeilige für komplexe Architektur-Entscheidungen und Dokumentations-Blöcke für Module und wichtige Ressourcen.

# ============================================================================
# MULTI-TIER ARCHITECTURE CONFIGURATION
# ============================================================================
# This configuration implements a three-tier architecture pattern:
# 1. Presentation Tier (Public Subnets) - Load Balancers, Bastion Hosts
# 2. Application Tier (Private Subnets) - Application Servers, APIs
# 3. Data Tier (Database Subnets) - RDS, ElastiCache, Data Storage
#
# Architecture Decision: We use separate subnets for each tier to implement
# defense-in-depth security and enable granular network access controls.
#
# Last Updated: 2024-01-15
# Author: DevOps Team
# Review Date: 2024-04-15
# ============================================================================

# VPC Configuration
# Why /16 CIDR: Provides 65,536 IP addresses, sufficient for enterprise growth
# Why enable_dns_hostnames: Required for RDS endpoint resolution
resource "aws_vpc" "main" {
  cidr_block           = var.vpc_cidr  # Default: 10.0.0.0/16
  enable_dns_hostnames = true
  enable_dns_support   = true
  
  tags = {
    Name = "${var.project_name}-vpc"
    # Important: This tag is used by AWS Load Balancer Controller
    "kubernetes.io/cluster/${var.project_name}" = "shared"
  }
}

# ============================================================================
# PRESENTATION TIER (PUBLIC SUBNETS)
# ============================================================================
# Public subnets host internet-facing resources like:
# - Application Load Balancers
# - NAT Gateways
# - Bastion Hosts (if needed)
#
# Security Consideration: Only resources that need direct internet access
# should be placed here. Application servers go in private subnets.
# ============================================================================

# Public Subnets - One per Availability Zone
# Why count.index: Ensures subnets are distributed across AZs for high availability
# Why cidrsubnet: Automatically calculates subnet CIDR blocks
resource "aws_subnet" "public" {
  count = length(var.availability_zones)
  
  vpc_id                  = aws_vpc.main.id
  cidr_block              = cidrsubnet(var.vpc_cidr, 8, count.index)
  availability_zone       = var.availability_zones[count.index]
  map_public_ip_on_launch = true  # Auto-assign public IPs to instances launched here
  
  tags = {
    Name = "${var.project_name}-public-${count.index + 1}"
    Type = "public"
    Tier = "presentation"
    # Kubernetes requires this tag for subnet discovery
    "kubernetes.io/role/elb" = "1"
  }
}
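
# Zur Veranschaulichung: cidrsubnet(prefix, newbits, netnum) erweitert den
# Präfix um newbits Bits - aus /16 plus 8 Bits werden /24-Subnetze.
# Beispielwerte für vpc_cidr = "10.0.0.0/16" (nur zur Illustration):
locals {
  cidr_examples = {
    public_0 = cidrsubnet("10.0.0.0/16", 8, 0)  # "10.0.0.0/24"
    public_1 = cidrsubnet("10.0.0.0/16", 8, 1)  # "10.0.1.0/24"
    app_0    = cidrsubnet("10.0.0.0/16", 8, 10) # "10.0.10.0/24" (Offset +10)
    db_0     = cidrsubnet("10.0.0.0/16", 8, 20) # "10.0.20.0/24" (Offset +20)
  }
}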

# Internet Gateway
# Why dependency: VPC must exist before IGW attachment
resource "aws_internet_gateway" "main" {
  vpc_id = aws_vpc.main.id
  
  tags = {
    Name = "${var.project_name}-igw"
  }
}

# Public Route Table
# Why single route table: All public subnets need identical routing
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id
  
  # Route all traffic to Internet Gateway
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.main.id
  }
  
  tags = {
    Name = "${var.project_name}-public-rt"
  }
}

# Associate public subnets with route table
resource "aws_route_table_association" "public" {
  count = length(aws_subnet.public)
  
  subnet_id      = aws_subnet.public[count.index].id
  route_table_id = aws_route_table.public.id
}

# ============================================================================
# APPLICATION TIER (PRIVATE SUBNETS)
# ============================================================================
# Private subnets host application servers and internal services:
# - Web servers (behind ALB)
# - API servers
# - Background workers
# - Internal services
#
# Security Consideration: These resources cannot be directly accessed from
# the internet. They receive traffic through the load balancer and can
# initiate outbound connections through NAT Gateway.
# ============================================================================

# Private Subnets - Application Tier
# Why separate from public: Defense-in-depth security architecture
# Why +10 offset: Avoids IP conflicts with public subnets
resource "aws_subnet" "private_app" {
  count = length(var.availability_zones)
  
  vpc_id            = aws_vpc.main.id
  cidr_block        = cidrsubnet(var.vpc_cidr, 8, count.index + 10)
  availability_zone = var.availability_zones[count.index]
  
  tags = {
    Name = "${var.project_name}-private-app-${count.index + 1}"
    Type = "private"
    Tier = "application"
    # Kubernetes requires this tag for internal load balancers
    "kubernetes.io/role/internal-elb" = "1"
  }
}

# ============================================================================
# DATA TIER (DATABASE SUBNETS)
# ============================================================================
# Database subnets are the most restrictive tier:
# - RDS instances
# - ElastiCache clusters
# - Data storage services
#
# Security Consideration: These subnets have no internet access whatsoever.
# They only communicate with application tier through specific ports.
# ============================================================================

# Database Subnets - Most restrictive tier
# Why +20 offset: Clear separation from other tiers
# Why no internet access: Databases should never communicate with internet
resource "aws_subnet" "private_db" {
  count = length(var.availability_zones)
  
  vpc_id            = aws_vpc.main.id
  cidr_block        = cidrsubnet(var.vpc_cidr, 8, count.index + 20)
  availability_zone = var.availability_zones[count.index]
  
  tags = {
    Name = "${var.project_name}-private-db-${count.index + 1}"
    Type = "private"
    Tier = "data"
  }
}
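
# Optionale Ergänzung: Die Database-Subnets lassen sich in einer RDS Subnet
# Group bündeln - Voraussetzung für Multi-AZ-RDS-Deployments über alle AZs.
resource "aws_db_subnet_group" "main" {
  name       = "${var.project_name}-db-subnet-group"
  subnet_ids = aws_subnet.private_db[*].id

  tags = {
    Name = "${var.project_name}-db-subnet-group"
  }
}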

# ============================================================================
# NAT GATEWAY CONFIGURATION
# ============================================================================
# NAT Gateways enable outbound internet access for private subnets
# 
# Architecture Decision: One NAT Gateway per AZ for high availability
# Cost Consideration: NAT Gateways are expensive. For cost optimization,
# consider using a single NAT Gateway (less resilient but cheaper).
# ============================================================================

# Elastic IPs for NAT Gateways
# Why depends_on: EIPs require IGW to be attached first
resource "aws_eip" "nat" {
  count = var.multi_az_nat ? length(var.availability_zones) : 1
  
  domain = "vpc"
  depends_on = [aws_internet_gateway.main]
  
  tags = {
    Name = "${var.project_name}-nat-eip-${count.index + 1}"
  }
}

# NAT Gateways
# Why public subnet: NAT Gateway needs internet access to route traffic
# Why conditional count: Supports both single and multi-AZ configurations
resource "aws_nat_gateway" "main" {
  count = var.multi_az_nat ? length(var.availability_zones) : 1
  
  allocation_id = aws_eip.nat[count.index].id
  subnet_id     = aws_subnet.public[count.index].id
  
  depends_on = [aws_internet_gateway.main]
  
  tags = {
    Name = "${var.project_name}-nat-${count.index + 1}"
  }
}

# ============================================================================
# PRIVATE SUBNET ROUTING
# ============================================================================
# Private subnets need different routing tables because:
# 1. They route internet traffic through NAT Gateway
# 2. Each AZ may have its own NAT Gateway for high availability
# 3. Database subnets may have different routing rules
# ============================================================================

# Route Tables for Private Application Subnets
# Why per-AZ: Each AZ routes through its own NAT Gateway
resource "aws_route_table" "private_app" {
  count = length(var.availability_zones)
  
  vpc_id = aws_vpc.main.id
  
  # Route internet traffic through NAT Gateway
  # Why conditional: Supports both single and multi-AZ NAT configurations
  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.main[var.multi_az_nat ? count.index : 0].id
  }
  
  tags = {
    Name = "${var.project_name}-private-app-rt-${count.index + 1}"
  }
}

# Associate private app subnets with route tables
resource "aws_route_table_association" "private_app" {
  count = length(aws_subnet.private_app)
  
  subnet_id      = aws_subnet.private_app[count.index].id
  route_table_id = aws_route_table.private_app[count.index].id
}

# Route Tables for Database Subnets
# Why separate: Database subnets may need different routing rules
# Why no internet route: Databases should never access internet directly
resource "aws_route_table" "private_db" {
  count = length(var.availability_zones)
  
  vpc_id = aws_vpc.main.id
  
  # No internet route - databases stay internal
  # Future: Add routes for VPC endpoints, peering connections
  
  tags = {
    Name = "${var.project_name}-private-db-rt-${count.index + 1}"
  }
}

# Associate database subnets with route tables
resource "aws_route_table_association" "private_db" {
  count = length(aws_subnet.private_db)
  
  subnet_id      = aws_subnet.private_db[count.index].id
  route_table_id = aws_route_table.private_db[count.index].id
}

# ============================================================================
# SECURITY GROUPS
# ============================================================================
# Security groups implement stateful firewall rules at the instance level
# Following principle of least privilege - only allow required traffic
# ============================================================================

# Application Load Balancer Security Group
# Why port 80/443: Standard web traffic
# Why 0.0.0.0/0: Internet-facing load balancer needs public access
resource "aws_security_group" "alb" {
  name_prefix = "${var.project_name}-alb-"
  description = "Security group for Application Load Balancer"
  vpc_id      = aws_vpc.main.id
  
  # HTTP ingress from internet
  ingress {
    description = "HTTP from internet"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  
  # HTTPS ingress from internet
  ingress {
    description = "HTTPS from internet"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  
  # All outbound traffic (for health checks)
  egress {
    description = "All outbound traffic"
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  
  tags = {
    Name = "${var.project_name}-alb-sg"
  }
  
  # Create replacement SG before destroying the old one (enabled by name_prefix)
  lifecycle {
    create_before_destroy = true
  }
}

# Application Servers Security Group
# Why reference ALB SG: Only allow traffic from load balancer
# Why port 8080: Common application port (adjust as needed)
resource "aws_security_group" "app_servers" {
  name_prefix = "${var.project_name}-app-"
  description = "Security group for application servers"
  vpc_id      = aws_vpc.main.id
  
  # HTTP from ALB only
  ingress {
    description     = "HTTP from ALB"
    from_port       = 8080
    to_port         = 8080
    protocol        = "tcp"
    security_groups = [aws_security_group.alb.id]
  }
  
  # SSH from bastion host (if exists)
  # TODO: Implement bastion host security group
  dynamic "ingress" {
    for_each = var.enable_bastion ? [1] : []
    content {
      description = "SSH from bastion"
      from_port   = 22
      to_port     = 22
      protocol    = "tcp"
      cidr_blocks = [var.vpc_cidr]  # Only from VPC
    }
  }
  
  # All outbound traffic (for package updates, API calls)
  egress {
    description = "All outbound traffic"
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  
  tags = {
    Name = "${var.project_name}-app-sg"
  }
  
  lifecycle {
    create_before_destroy = true
  }
}

# Database Security Group
# Why port 3306: MySQL/MariaDB default port
# Why app servers only: Databases should only accept app connections
resource "aws_security_group" "database" {
  name_prefix = "${var.project_name}-db-"
  description = "Security group for database servers"
  vpc_id      = aws_vpc.main.id
  
  # MySQL/MariaDB from app servers only
  ingress {
    description     = "MySQL from app servers"
    from_port       = 3306
    to_port         = 3306
    protocol        = "tcp"
    security_groups = [aws_security_group.app_servers.id]
  }
  
  # PostgreSQL support (if needed)
  dynamic "ingress" {
    for_each = var.database_engine == "postgres" ? [1] : []
    content {
      description     = "PostgreSQL from app servers"
      from_port       = 5432
      to_port         = 5432
      protocol        = "tcp"
      security_groups = [aws_security_group.app_servers.id]
    }
  }
  
  # No egress block defined: Terraform removes AWS's default allow-all
  # egress rule, so the database cannot initiate outbound connections
  
  tags = {
    Name = "${var.project_name}-db-sg"
  }
  
  lifecycle {
    create_before_destroy = true
  }
}

/*
 * ARCHITECTURE SUMMARY
 * ====================
 * This configuration creates a secure, scalable three-tier architecture:
 * 
 * 1. Public Subnets (Presentation Tier)
 *    - Hosts internet-facing load balancers
 *    - Has direct internet access via Internet Gateway
 *    - Routes: 0.0.0.0/0 -> Internet Gateway
 * 
 * 2. Private App Subnets (Application Tier)
 *    - Hosts application servers and APIs
 *    - Internet access via NAT Gateway for updates
 *    - Routes: 0.0.0.0/0 -> NAT Gateway
 * 
 * 3. Private DB Subnets (Data Tier)
 *    - Hosts databases and data storage
 *    - No internet access at all
 *    - Routes: Local traffic only
 * 
 * Security Groups implement defense-in-depth:
 * - ALB: Accepts internet traffic on 80/443
 * - App Servers: Only accept traffic from ALB
 * - Database: Only accepts traffic from app servers
 * 
 * High Availability:
 * - Resources deployed across multiple AZs
 * - Optional multi-AZ NAT Gateways
 * - Database subnets ready for Multi-AZ RDS
 */
HCL

🔧 Code-Formatierung mit terraform fmt:

# Vor terraform fmt (schlecht formatiert)
resource "aws_security_group" "web" {
name="web-sg"
vpc_id=aws_vpc.main.id
ingress{
from_port=80
to_port=80
protocol="tcp"
cidr_blocks=["0.0.0.0/0"]
}
}

# Nach terraform fmt (gut formatiert)
resource "aws_security_group" "web" {
  name   = "web-sg"
  vpc_id = aws_vpc.main.id
  
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
HCL

Kommentar-Best-Practices:

Kommentar-Typ      | Verwendung                | Beispiel
Einzeilig (#)      | Kurze Erklärungen         | # SSH access from bastion
Mehrzeilig (/* */) | Architektur-Dokumentation | Block-Kommentare
Header-Blöcke      | Abschnitts-Trennung       | # ====== SECTION ======
Inline             | Spezifische Werte         | port = 8080 # Application port
TODO/FIXME         | Zukünftige Änderungen     | # TODO: Implement bastion host

💡 Formatierung-Tipps:

┌ Verwende terraform fmt regelmäßig für konsistente Formatierung
├ Nutze aussagekräftige Einrückungen für bessere Lesbarkeit
├ Gruppiere verwandte Ressourcen mit Kommentar-Blöcken
└ Dokumentiere Architektur-Entscheidungen ausführlich

⚠️ Kommentar-Stolperfallen:

┌ Veraltete Kommentare sind schlimmer als keine Kommentare
├ Zu viele Kommentare machen den Code unlesbar
├ Kommentare sollten das "Warum" erklären, nicht das "Was"
└ Geheimnisse oder Passwörter niemals in Kommentaren

Debugging-Techniken für komplexe Syntax:

# Terraform-Konfiguration validieren
terraform validate

# Formatierung prüfen
terraform fmt -check -diff

# Syntax-Highlighting in vim
vim -c "syntax on" -c "set ft=terraform" main.tf

# Lokale Werte debuggen
terraform console
> local.current_config
> local.resource_naming.prefix

# Template-Funktionen testen (alle im Template referenzierten Variablen übergeben)
terraform console
> templatefile("userdata.sh", { project_name = "demo", environment = "dev" })
Bash

Die erweiterte HCL-Syntax ermöglicht es dir, komplexe, wartbare Terraform-Konfigurationen zu erstellen. Mit Objects, Sets und Maps strukturierst du deine Daten logisch. String-Interpolation und Templates machen deine Konfigurationen dynamisch. Gute Kommentare und Formatierung sorgen für Wartbarkeit im Team.

Fortgeschrittene Variablen und Outputs

Was macht fortgeschrittene Variablen und Outputs aus? In den Grundlagen hast du einfache Variablen-Definitionen kennengelernt. Jetzt geht es um erweiterte Features wie komplexe Validierungen, sensitive Daten-Handling und conditional Outputs. Diese Funktionen verwandeln deine Terraform-Konfiguration von statischen Definitionen in intelligente, selbstvalidierende Infrastruktur-Templates.

Warum sind erweiterte Variablen-Features entscheidend? Ohne robuste Validierung und strukturierte Outputs entstehen Laufzeitfehler, die in produktiven Umgebungen katastrophal sein können. Erweiterte Features fangen Fehler bereits zur Planungszeit ab und machen deine Infrastruktur vorhersagbarer und sicherer.

Worauf musst du bei fortgeschrittenen Variablen achten? Überkomplexe Validierungen können die Performance beeinträchtigen und die Konfiguration schwer verständlich machen. Sensitive Variablen erfordern besondere Sorgfalt im Umgang mit State-Files und Logs.

Wofür verwendest du erweiterte Variablen-Features? Für Enterprise-Umgebungen, wo Compliance-Anforderungen strenge Validierung erfordern, für Multi-Team-Projekte mit komplexen Abhängigkeiten und für Infrastrukturen, die sensible Daten verwalten müssen.
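Ein kurzer Vorgeschmack auf das Handling sensitiver Daten, hier als minimale Skizze (der Variablenname db_password ist frei gewählt):

```hcl
variable "db_password" {
  description = "Master-Passwort für die Datenbank"
  type        = string
  sensitive   = true # Wert wird in plan-/apply-Ausgaben als (sensitive value) maskiert

  validation {
    condition     = length(var.db_password) >= 12
    error_message = "Database password must be at least 12 characters long."
  }
}
```

Wichtig: sensitive maskiert nur die CLI-Ausgabe; im State-File steht der Wert weiterhin im Klartext. Genau deshalb erfordert der Umgang mit State-Files besondere Sorgfalt.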

Input Variables mit komplexen Validierungen

Was sind komplexe Validierungen? Validierungen gehen über einfache Typ-Checks hinaus. Sie prüfen Geschäftslogik, Compliance-Anforderungen und technische Constraints. Du kannst Regex-Pattern, Bereiche, Listen-Inhalte und sogar externe API-Responses validieren.

Warum sind Validierungen so wichtig? Sie verhindern kostspielige Fehler in der Produktion. Eine falsche CIDR-Block-Konfiguration kann Netzwerk-Ausfälle verursachen. Ungültige Instance-Types können zu Performance-Problemen führen. Validierungen fangen diese Probleme vor der Anwendung ab.

Worauf solltest du bei Validierungen achten? Validierungen laufen bei jedem Plan und Apply. Komplexe Validierungen können die Performance beeinträchtigen. Fehlermeldungen sollten verständlich und handlungsorientiert sein.

Wofür verwendest du komplexe Validierungen? Für Compliance-Checks, Sicherheits-Validierungen, Kosten-Kontrolle und technische Constraints.

🔧 Praktisches Beispiel – Erweiterte Variable-Validierung:

# Erweiterte Netzwerk-Konfiguration mit komplexen Validierungen
variable "network_config" {
  description = "Complete network configuration with validation"
  type = object({
    vpc_cidr = string
    environment = string
    availability_zones = list(string)
    public_subnets = list(string)
    private_subnets = list(string)
    database_subnets = list(string)
    enable_nat_gateway = bool
    enable_vpn_gateway = bool
    dns_domain = string
    tags = map(string)
  })
  
  # CIDR-Block-Validierung
  validation {
    condition = can(cidrhost(var.network_config.vpc_cidr, 0))
    error_message = "VPC CIDR must be a valid IPv4 CIDR block (e.g., 10.0.0.0/16)."
  }
  
  # CIDR-Größen-Validierung
  validation {
    condition = tonumber(split("/", var.network_config.vpc_cidr)[1]) >= 16 && tonumber(split("/", var.network_config.vpc_cidr)[1]) <= 24
    error_message = "VPC CIDR netmask must be between /16 and /24 for proper subnet allocation."
  }
  
  # Private IP-Bereiche-Validierung (RFC 1918)
  validation {
    condition = can(regex("^(10\\.|172\\.(1[6-9]|2[0-9]|3[0-1])\\.|192\\.168\\.)", var.network_config.vpc_cidr))
    error_message = "VPC CIDR must use private IP ranges (10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16)."
  }
  
  # Environment-Validierung
  validation {
    condition = contains(["dev", "staging", "prod", "sandbox"], var.network_config.environment)
    error_message = "Environment must be one of: dev, staging, prod, sandbox."
  }
  
  # Availability Zone-Validierung
  validation {
    condition = length(var.network_config.availability_zones) >= 2 && length(var.network_config.availability_zones) <= 6
    error_message = "Must specify between 2 and 6 availability zones for high availability."
  }
  
  # Subnet-Anzahl-Validierung
  validation {
    condition = length(var.network_config.public_subnets) == length(var.network_config.availability_zones)
    error_message = "Number of public subnets must match number of availability zones."
  }
  
  # Subnet-CIDR-Validierung
  validation {
    condition = alltrue([
      for subnet in var.network_config.public_subnets :
      can(cidrhost(subnet, 0))
    ])
    error_message = "All public subnet CIDRs must be valid IPv4 CIDR blocks."
  }
  
  # DNS-Domain-Validierung
  validation {
    condition = can(regex("^[a-z0-9.-]+\\.[a-z]{2,}$", var.network_config.dns_domain))
    error_message = "DNS domain must be a valid domain name (e.g., example.com)."
  }
  
  # Tags-Validierung
  validation {
    condition = alltrue([
      for key, value in var.network_config.tags : 
      can(regex("^[A-Za-z0-9._-]{1,128}$", key)) && 
      can(regex("^[A-Za-z0-9._\\s-]{1,256}$", value))
    ])
    error_message = "Tag keys and values must follow AWS naming conventions."
  }
}

# Erweiterte Compute-Konfiguration mit Business-Logic-Validierung
variable "compute_config" {
  description = "Compute configuration with business logic validation"
  type = object({
    instance_types = map(string)
    min_instances = number
    max_instances = number
    desired_instances = number
    auto_scaling_enabled = bool
    spot_instances_enabled = bool
    spot_max_price = string
    health_check_type = string
    health_check_grace_period = number
    termination_policies = list(string)
  })
  
  # Instance-Type-Validierung
  validation {
    condition = alltrue([
      for type in values(var.compute_config.instance_types) : 
      can(regex("^[a-z][0-9]+[a-z]*\\.(nano|micro|small|medium|large|xlarge|[0-9]+xlarge)$", type))
    ])
    error_message = "Instance types must be valid AWS instance types (e.g., t3.micro, m5.large)."
  }
  
  # Scaling-Limits-Validierung
  validation {
    condition = var.compute_config.min_instances <= var.compute_config.desired_instances && var.compute_config.desired_instances <= var.compute_config.max_instances
    error_message = "Scaling configuration must follow: min_instances <= desired_instances <= max_instances."
  }
  
  # Spot-Price-Validierung
  validation {
    condition = var.compute_config.spot_instances_enabled ? can(tonumber(var.compute_config.spot_max_price)) && tonumber(var.compute_config.spot_max_price) > 0 : true
    error_message = "Spot max price must be a positive number when spot instances are enabled."
  }
  
  # Health-Check-Validierung
  validation {
    condition = contains(["EC2", "ELB"], var.compute_config.health_check_type)
    error_message = "Health check type must be 'EC2' or 'ELB'."
  }
  
  # Termination-Policy-Validierung
  validation {
    condition = alltrue([
      for policy in var.compute_config.termination_policies : 
      contains(["OldestInstance", "NewestInstance", "OldestLaunchConfiguration", "ClosestToNextInstanceHour", "Default"], policy)
    ])
    error_message = "Termination policies must be valid AWS Auto Scaling termination policies."
  }
}

# Erweiterte Datenbank-Konfiguration mit Compliance-Validierung
variable "database_config" {
  description = "Database configuration with compliance validation"
  type = object({
    engine = string
    engine_version = string
    instance_class = string
    allocated_storage = number
    max_storage = number
    storage_type = string
    storage_encrypted = bool
    kms_key_id = string
    backup_retention_period = number
    backup_window = string
    maintenance_window = string
    multi_az = bool
    publicly_accessible = bool
    deletion_protection = bool
    performance_insights_enabled = bool
    monitoring_interval = number
    auto_minor_version_upgrade = bool
    parameter_group_family = string
    option_group_name = string
  })
  
  # Engine-Validierung
  validation {
    condition = contains(["mysql", "postgres", "mariadb", "oracle-ee", "oracle-se2", "sqlserver-ee", "sqlserver-se", "sqlserver-ex", "sqlserver-web"], var.database_config.engine)
    error_message = "Database engine must be a supported RDS engine."
  }
  
  # Storage-Validierung
  validation {
    condition = var.database_config.allocated_storage >= 20 && var.database_config.allocated_storage <= 65536
    error_message = "Allocated storage must be between 20 GB and 65,536 GB."
  }
  
  # Storage-Type-Validierung
  validation {
    condition = contains(["gp2", "gp3", "io1", "io2", "standard"], var.database_config.storage_type)
    error_message = "Storage type must be gp2, gp3, io1, io2, or standard (magnetic)."
  }
  
  # Backup-Retention-Validierung
  validation {
    condition = var.database_config.backup_retention_period >= 0 && var.database_config.backup_retention_period <= 35
    error_message = "Backup retention period must be between 0 and 35 days."
  }
  
  # Backup-Window-Format-Validierung
  validation {
    condition = can(regex("^[0-2][0-9]:[0-5][0-9]-[0-2][0-9]:[0-5][0-9]$", var.database_config.backup_window))
    error_message = "Backup window must be in format HH:MM-HH:MM (e.g., 03:00-04:00)."
  }
  
  # Maintenance-Window-Format-Validierung
  validation {
    condition = can(regex("^(sun|mon|tue|wed|thu|fri|sat):[0-2][0-9]:[0-5][0-9]-(sun|mon|tue|wed|thu|fri|sat):[0-2][0-9]:[0-5][0-9]$", var.database_config.maintenance_window))
    error_message = "Maintenance window must be in format ddd:HH:MM-ddd:HH:MM (e.g., sun:04:00-sun:05:00)."
  }
  
  # Compliance-Validierung: Verschlüsselung erforderlich für Produktion
  validation {
    condition = var.database_config.storage_encrypted == true
    error_message = "Storage encryption is required for all database instances for compliance."
  }
  
  # Compliance-Validierung: Deletion Protection für Produktion
  validation {
    condition = var.database_config.deletion_protection == true
    error_message = "Deletion protection must be enabled for all database instances."
  }
  
  # Compliance-Validierung: Kein öffentlicher Zugriff
  validation {
    condition = var.database_config.publicly_accessible == false
    error_message = "Database instances must not be publicly accessible for security compliance."
  }
  
  # Monitoring-Interval-Validierung
  validation {
    condition = contains([0, 1, 5, 10, 15, 30, 60], var.database_config.monitoring_interval)
    error_message = "Monitoring interval must be 0, 1, 5, 10, 15, 30, or 60 seconds."
  }
}

# Sicherheits-Konfiguration mit erweiterten Validierungen
variable "security_config" {
  description = "Security configuration with advanced validation"
  type = object({
    enable_flow_logs = bool
    enable_cloudtrail = bool
    enable_config = bool
    enable_guardduty = bool
    security_group_rules = list(object({
      type = string
      from_port = number
      to_port = number
      protocol = string
      cidr_blocks = list(string)
      description = string
    }))
    kms_key_rotation_enabled = bool
    password_policy = object({
      minimum_password_length = number
      require_lowercase_characters = bool
      require_uppercase_characters = bool
      require_numbers = bool
      require_symbols = bool
      max_password_age = number
    })
  })
  
  # Security-Group-Rules-Validierung
  validation {
    condition = alltrue([
      for rule in var.security_config.security_group_rules : 
      contains(["ingress", "egress"], rule.type) &&
      rule.from_port >= 0 && rule.from_port <= 65535 &&
      rule.to_port >= 0 && rule.to_port <= 65535 &&
      rule.from_port <= rule.to_port
    ])
    error_message = "Security group rules must have valid types, ports, and port ranges."
  }
  
  # CIDR-Blocks-Validierung für Security Groups
  validation {
    condition = alltrue([
      for rule in var.security_config.security_group_rules : 
      alltrue([
        for cidr in rule.cidr_blocks : 
        can(cidrhost(cidr, 0))
      ])
    ])
    error_message = "All CIDR blocks in security group rules must be valid."
  }
  
  # Gefährliche Ports-Validierung
  validation {
    condition = alltrue([
      for rule in var.security_config.security_group_rules : 
      !(rule.type == "ingress" && contains(rule.cidr_blocks, "0.0.0.0/0") && 
        (rule.from_port <= 22 && rule.to_port >= 22 || 
         rule.from_port <= 3389 && rule.to_port >= 3389))
    ])
    error_message = "SSH (22) and RDP (3389) ports must not be open to 0.0.0.0/0 for security."
  }
  
  # Password-Policy-Validierung
  validation {
    condition = var.security_config.password_policy.minimum_password_length >= 8 && var.security_config.password_policy.minimum_password_length <= 128
    error_message = "Password minimum length must be between 8 and 128 characters."
  }
  
  # Password-Complexity-Validierung
  validation {
    condition = var.security_config.password_policy.require_lowercase_characters && var.security_config.password_policy.require_uppercase_characters && var.security_config.password_policy.require_numbers && var.security_config.password_policy.require_symbols
    error_message = "Password policy must require all character types for security compliance."
  }
}
HCL

Validierungs-Strategien:

| Validierungs-Typ | Anwendung | Performance | Komplexität |
|---|---|---|---|
| Typ-Validierung | Basis-Datentypen | Sehr hoch | Niedrig |
| Regex-Validierung | Format-Prüfung | Hoch | Mittel |
| Range-Validierung | Numerische Bereiche | Sehr hoch | Niedrig |
| List-Validierung | Erlaubte Werte | Hoch | Niedrig |
| Business-Logic | Komplexe Regeln | Mittel | Hoch |
| Cross-Field | Abhängigkeiten | Mittel | Hoch |

💡 Validierungs-Optimierung: Verwende einfache Validierungen zuerst, dann komplexe. Das verbessert die Performance und macht Fehlermeldungen verständlicher.
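
Ein minimales Beispiel dazu (Variablenname hypothetisch gewählt): Der schnelle Range-Check steht vor der aufwendigeren Listen-Prüfung, sodass triviale Eingabefehler zuerst gemeldet werden.

# Hypothetisches Beispiel: einfache vor komplexen Validierungen
variable "retention_days" {
  description = "Backup retention in days"
  type        = number

  # 1. Einfacher, schneller Range-Check
  validation {
    condition     = var.retention_days >= 1 && var.retention_days <= 365
    error_message = "retention_days must be between 1 and 365."
  }

  # 2. Aufwendigere Prüfung gegen erlaubte Werte erst danach
  validation {
    condition     = contains([7, 14, 30, 90, 365], var.retention_days)
    error_message = "retention_days must be one of: 7, 14, 30, 90, 365."
  }
}
HCL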

Erweiterte Validierungs-Funktionen:

# Conditional Validation basierend auf anderen Variablen
# (Hinweis: Referenzen auf andere Variablen in validation-Blöcken
# erfordern Terraform 1.9 oder neuer)
locals {
  is_production = var.environment == "prod"
  is_development = var.environment == "dev"
}

variable "monitoring_config" {
  description = "Monitoring configuration with conditional validation"
  type = object({
    enabled = bool
    retention_days = number
    detailed_monitoring = bool
    alert_endpoints = list(string)
  })
  
  # Conditional Validation: Monitoring erforderlich für Produktion
  validation {
    condition = var.environment == "prod" ? var.monitoring_config.enabled : true
    error_message = "Monitoring must be enabled for production environments."
  }
  
  # Conditional Validation: Retention für Produktion
  validation {
    condition = var.environment == "prod" ? var.monitoring_config.retention_days >= 90 : var.monitoring_config.retention_days >= 7
    error_message = "Production environments require minimum 90 days retention, others require minimum 7 days."
  }
  
  # Email-Format-Validierung
  validation {
    condition = alltrue([
      for email in var.monitoring_config.alert_endpoints : 
      can(regex("^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,}$", email))
    ])
    error_message = "All alert endpoints must be valid email addresses."
  }
}

# Cross-Variable-Validierung
variable "scaling_config" {
  description = "Auto scaling configuration"
  type = object({
    min_size = number
    max_size = number
    desired_capacity = number
    target_cpu_utilization = number
  })
  
  # Cross-Field-Validierung
  validation {
    condition = var.scaling_config.min_size <= var.scaling_config.desired_capacity && var.scaling_config.desired_capacity <= var.scaling_config.max_size
    error_message = "Scaling configuration must follow: min_size <= desired_capacity <= max_size."
  }
  
  # Business-Logic-Validierung
  validation {
    condition = var.scaling_config.max_size <= (var.environment == "prod" ? 100 : 10)
    error_message = "Production environments can scale to 100 instances, others are limited to 10."
  }
}
HCL

⚠️ Validierungs-Stolperfallen:

| Problem | Symptom | Lösung |
|---|---|---|
| Langsame Validierung | Plan dauert lange | Einfache Checks zuerst |
| Zirkuläre Referenzen | Validation error | Abhängigkeiten prüfen |
| Unklare Fehlermeldungen | Verwirrung bei Fehlern | Spezifische Nachrichten |
| Über-Validierung | Schwer zu verwenden | Nur notwendige Checks |

Sensitive Variables und Credential-Handling

Was sind sensitive Variablen? Sensitive Variablen enthalten vertrauliche Informationen wie Passwörter, API-Keys oder Zertifikate. Terraform behandelt sie speziell: Sie werden nicht im Plan-Output angezeigt und nicht in Logs geschrieben. In der State-Datei liegen sie allerdings weiterhin im Klartext und müssen dort über ein verschlüsseltes Backend geschützt werden.

Warum ist korrektes Credential-Handling kritisch? Falsch behandelte Credentials können zu Sicherheitslücken führen. Passwörter im Plan-Output, API-Keys in Logs oder unverschlüsselte Secrets in State-Files sind häufige Schwachstellen.

Worauf musst du bei sensitive Variablen achten? Sensitive Variablen sind nur in Terraform „sensitive“, nicht in der echten Welt. State-Files müssen verschlüsselt werden, Logs müssen sicher behandelt werden, und Team-Zugriff muss kontrolliert werden.

Wofür verwendest du sensitive Variablen? Für Passwörter, API-Keys, Zertifikate, Tokens und alle anderen vertraulichen Informationen, die in deiner Infrastruktur verwendet werden.

# Erweiterte Sensitive-Variable-Konfiguration
variable "database_credentials" {
  description = "Database credentials - marked as sensitive"
  type = object({
    master_username = string
    master_password = string
    replica_username = string
    replica_password = string
  })
  sensitive = true
  
  validation {
    condition = length(var.database_credentials.master_password) >= 12
    error_message = "Master password must be at least 12 characters long."
  }
  
  validation {
    condition = can(regex("^[A-Za-z0-9!@#$%^&*()_+=-]+$", var.database_credentials.master_password))
    error_message = "Password must contain only allowed characters."
  }
}

# API-Keys und Tokens
variable "external_api_keys" {
  description = "External API keys and tokens"
  type = object({
    github_token = string
    docker_registry_token = string
    monitoring_api_key = string
    backup_service_key = string
  })
  sensitive = true
  
  validation {
    condition = alltrue([
      for key in values(var.external_api_keys) : 
      length(key) >= 8 && length(key) <= 256
    ])
    error_message = "API keys must be between 8 and 256 characters long."
  }
}

# SSL-Zertifikate
variable "ssl_certificates" {
  description = "SSL certificates and private keys"
  type = object({
    certificate_body = string
    private_key = string
    certificate_chain = string
  })
  sensitive = true
  
  validation {
    condition = can(regex("^-----BEGIN CERTIFICATE-----", var.ssl_certificates.certificate_body))
    error_message = "Certificate body must be a valid PEM-encoded certificate."
  }
  
  validation {
    condition = can(regex("^-----BEGIN (RSA )?PRIVATE KEY-----", var.ssl_certificates.private_key))
    error_message = "Private key must be a valid PEM-encoded private key."
  }
}

# Verschlüsselungsschlüssel
variable "encryption_keys" {
  description = "Encryption keys for various services"
  type = object({
    application_secret_key = string
    session_encryption_key = string
    database_encryption_key = string
  })
  sensitive = true
  
  validation {
    condition = alltrue([
      for key in values(var.encryption_keys) : 
      length(key) >= 32
    ])
    error_message = "Encryption keys must be at least 32 characters long."
  }
}
HCL

Sichere Credential-Übergabe:

# Lokale Sensitive-Werte mit Berechnungen
locals {
  # Sichere Passwort-Generierung
  generated_passwords = {
    db_master = random_password.db_master.result
    db_replica = random_password.db_replica.result
    app_secret = random_password.app_secret.result
  }
  
  # Sichere Kombinationen
  database_connection_strings = {
    master = "postgresql://${var.database_credentials.master_username}:${local.generated_passwords.db_master}@${aws_db_instance.master.address}:5432/${aws_db_instance.master.db_name}"
    replica = "postgresql://${var.database_credentials.replica_username}:${local.generated_passwords.db_replica}@${aws_db_instance.replica.address}:5432/${aws_db_instance.replica.db_name}"
  }
}

# Sichere Password-Generierung
resource "random_password" "db_master" {
  length  = 16
  special = true
  numeric = true
  upper   = true
  lower   = true
}

resource "random_password" "db_replica" {
  length  = 16
  special = true
  numeric = true
  upper   = true
  lower   = true
}

resource "random_password" "app_secret" {
  length  = 32
  special = true
  numeric = true
  upper   = true
  lower   = true
}

# AWS Secrets Manager Integration
resource "aws_secretsmanager_secret" "database_credentials" {
  name                    = "${var.project_name}-${var.environment}-db-credentials"
  description             = "Database credentials for ${var.project_name}"
  recovery_window_in_days = 7
  
  tags = {
    Name = "${var.project_name}-db-credentials"
    Environment = var.environment
  }
}

resource "aws_secretsmanager_secret_version" "database_credentials" {
  secret_id = aws_secretsmanager_secret.database_credentials.id
  secret_string = jsonencode({
    master_username = var.database_credentials.master_username
    master_password = random_password.db_master.result
    replica_username = var.database_credentials.replica_username
    replica_password = random_password.db_replica.result
    connection_strings = local.database_connection_strings
  })
}

# KMS Key für Verschlüsselung
resource "aws_kms_key" "secrets" {
  description             = "KMS key for ${var.project_name} secrets"
  deletion_window_in_days = 7
  enable_key_rotation     = true
  
  tags = {
    Name = "${var.project_name}-secrets-key"
    Environment = var.environment
  }
}

resource "aws_kms_alias" "secrets" {
  name          = "alias/${var.project_name}-${var.environment}-secrets"
  target_key_id = aws_kms_key.secrets.key_id
}

# RDS-Instanz mit Secrets Manager
resource "aws_db_instance" "master" {
  identifier = "${var.project_name}-${var.environment}-master"
  
  engine         = "postgres"
  engine_version = "13.7"
  instance_class = "db.t3.micro"
  
  allocated_storage     = 20
  max_allocated_storage = 100
  storage_type         = "gp2"
  storage_encrypted    = true
  kms_key_id          = aws_kms_key.secrets.arn
  
  # Credentials: Passwort aus random_password, gespiegelt in Secrets Manager (siehe oben)
  password = random_password.db_master.result
  
  db_name  = replace("${var.project_name}_${var.environment}", "-", "_")
  username = var.database_credentials.master_username
  
  # Sichere Netzwerk-Konfiguration
  db_subnet_group_name   = aws_db_subnet_group.main.name
  vpc_security_group_ids = [aws_security_group.database.id]
  publicly_accessible    = false
  
  # Backup-Konfiguration
  backup_retention_period = 30
  backup_window          = "03:00-04:00"
  maintenance_window     = "sun:04:00-sun:05:00"
  
  # Monitoring
  monitoring_interval = 60
  monitoring_role_arn = aws_iam_role.rds_monitoring.arn
  
  # Schutz vor Löschung
  deletion_protection = true
  skip_final_snapshot = false
  final_snapshot_identifier = "${var.project_name}-${var.environment}-final-snapshot"
  
  tags = {
    Name = "${var.project_name}-${var.environment}-master"
    Environment = var.environment
  }
}
HCL

Credential-Handling-Strategien:

┌─────────────────────────────────────────────────────────────┐
│                Credential Handling Flow                     │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  1. Input (Sensitive Variables)                             │
│     ├─ Environment Variables                                │
│     ├─ Terraform Cloud/Enterprise                           │
│     ├─ HashiCorp Vault                                      │
│     └─ AWS Secrets Manager                                  │
│                    ↓                                        │
│  2. Processing (Terraform Core)                             │
│     ├─ Marked as Sensitive                                  │
│     ├─ Excluded from Logs                                   │
│     ├─ Hidden in Plan Output                                │
│     └─ Plaintext in State                                   │
│                    ↓                                        │
│  3. Storage (State Backend)                                 │
│     ├─ Encrypted at Rest                                    │
│     ├─ Encrypted in Transit                                 │
│     ├─ Access Control                                       │
│     └─ Audit Logging                                        │
│                    ↓                                        │
│  4. Usage (Resources)                                       │
│     ├─ Passed to Resources                                  │
│     ├─ Stored in AWS Secrets Manager                        │
│     ├─ Used for Authentication                              │
│     └─ Rotated Regularly                                    │
│                                                             │
└─────────────────────────────────────────────────────────────┘
Markdown

Best Practices für Credential-Handling:

| Aspekt | Empfehlung | Begründung |
|---|---|---|
| Eingabe | Umgebungsvariablen oder Vault | Nicht in Code speichern |
| Übertragung | TLS-verschlüsselt | Schutz vor Mitlauschen |
| Speicherung | Encrypted State Backend | Schutz von State-Files |
| Verwendung | Secrets Manager/Vault | Zentrale Verwaltung |
| Rotation | Automatisiert | Regelmäßige Erneuerung |
| Zugriff | Least Privilege | Minimale Berechtigungen |

🔧 Praktisches Beispiel – Environment-Variable-Setup:

# Sichere Environment-Variable-Konfiguration
export TF_VAR_database_credentials='{
  "master_username": "dbadmin",
  "master_password": "super-secure-password-123!",
  "replica_username": "readonly",
  "replica_password": "another-secure-password-456!"
}'

export TF_VAR_external_api_keys='{
  "github_token": "ghp_abcdef1234567890",
  "docker_registry_token": "dckr_pat_abcdef1234567890",
  "monitoring_api_key": "mon_abcdef1234567890",
  "backup_service_key": "bck_abcdef1234567890"
}'

# Terraform mit sicheren Variablen ausführen
terraform plan
terraform apply
Bash

Sensitive-Variable-Stolperfallen:

| Problem | Risiko | Lösung |
|---|---|---|
| Unverschlüsselter State | Credentials im Klartext | Encrypted Backend |
| Logs enthalten Secrets | Credential-Leakage | Log-Filtering |
| Plan-Output zeigt Secrets | Versehentliche Offenlegung | Sensitive marking |
| Hardcoded Credentials | Source-Code-Leakage | Environment Variables |

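
Ergänzend ein kurzer Sketch (die Variablennamen db_user und db_password sind hypothetisch): Mit der Built-in-Funktion sensitive() lässt sich die Sensitive-Markierung berechneter Werte explizit steuern; nonsensitive() hebt sie auf und sollte nur verwendet werden, wenn der Wert nachweislich kein Geheimnis enthält.

# sensitive() markiert einen berechneten Wert explizit als sensitiv
locals {
  connection_string = sensitive(
    "postgresql://${var.db_user}:${var.db_password}@db.internal:5432/app"
  )
}

# Outputs, die sensitive Werte referenzieren, müssen selbst sensitive sein
output "db_connection_string" {
  value     = local.connection_string
  sensitive = true
}
HCL
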
Output Values mit Conditional Logic

Was sind erweiterte Output-Features? Outputs können mehr als nur Werte zurückgeben. Sie können bedingte Logik enthalten, Werte transformieren, als sensitive markiert werden und komplexe Datenstrukturen ausgeben. Das macht sie zu mächtigen Werkzeugen für Module und Automatisierung.

Warum sind erweiterte Outputs wichtig? Sie machen deine Terraform-Module flexibler und wiederverwendbarer. Conditional Outputs ermöglichen es, verschiedene Informationen basierend auf der Konfiguration zurückzugeben. Das reduziert die Anzahl der benötigten Module-Varianten.

Worauf musst du bei erweiterten Outputs achten? Komplexe Output-Logik kann schwer zu debuggen sein. Sensitive Outputs müssen korrekt markiert werden, um Credential-Leakage zu verhindern. Performance kann bei komplexen Berechnungen leiden.

Wofür verwendest du erweiterte Outputs? Für Multi-Environment-Module, bedingte Ressourcen-Informationen, komplexe Datenstrukturen und Integration mit anderen Tools.

# Erweiterte Output-Konfiguration mit Conditional Logic
output "network_configuration" {
  description = "Complete network configuration details"
  value = {
    # Basis-Informationen
    vpc_id = aws_vpc.main.id
    vpc_cidr = aws_vpc.main.cidr_block
    
    # Bedingte Subnet-Informationen
    public_subnets = length(var.network_config.public_subnets) > 0 ? {
      ids = aws_subnet.public[*].id
      cidrs = aws_subnet.public[*].cidr_block
      availability_zones = aws_subnet.public[*].availability_zone
    } : null
    
    private_subnets = length(var.network_config.private_subnets) > 0 ? {
      ids = aws_subnet.private_app[*].id
      cidrs = aws_subnet.private_app[*].cidr_block
      availability_zones = aws_subnet.private_app[*].availability_zone
    } : null
    
    database_subnets = length(var.network_config.database_subnets) > 0 ? {
      ids = aws_subnet.private_db[*].id
      cidrs = aws_subnet.private_db[*].cidr_block
      availability_zones = aws_subnet.private_db[*].availability_zone
      subnet_group_name = aws_db_subnet_group.main.name
    } : null
    
    # Bedingte Gateway-Informationen
    internet_gateway = length(aws_subnet.public) > 0 ? {
      id = aws_internet_gateway.main.id
      route_table_id = aws_route_table.public.id
    } : null
    
    nat_gateways = var.network_config.enable_nat_gateway ? {
      ids = aws_nat_gateway.main[*].id
      eip_ids = aws_eip.nat[*].id
      public_ips = aws_eip.nat[*].public_ip
    } : null
    
    # VPN-Gateway falls aktiviert
    vpn_gateway = var.network_config.enable_vpn_gateway ? {
      id = aws_vpn_gateway.main[0].id
      amazon_side_asn = aws_vpn_gateway.main[0].amazon_side_asn
    } : null
    
    # DNS-Konfiguration
    dns_configuration = {
      domain_name = var.network_config.dns_domain
      private_zone_id = aws_route53_zone.private.zone_id
      public_zone_id = var.network_config.dns_domain != "" ? aws_route53_zone.public[0].zone_id : null
    }
    
    # Environment-spezifische Informationen
    environment_details = {
      environment = var.network_config.environment
      is_production = var.network_config.environment == "prod"
      monitoring_enabled = var.network_config.environment == "prod"
      backup_enabled     = var.network_config.environment == "prod"
    }
  }
}

# Erweiterte Compute-Outputs
output "compute_configuration" {
  description = "Complete compute configuration details"
  value = {
    # Launch Template-Informationen
    launch_template = {
      id = aws_launch_template.app.id
      latest_version = aws_launch_template.app.latest_version
      name = aws_launch_template.app.name
    }
    
    # Auto Scaling Group-Informationen
    autoscaling_group = var.compute_config.auto_scaling_enabled ? {
      name = aws_autoscaling_group.app[0].name
      arn = aws_autoscaling_group.app[0].arn
      min_size = aws_autoscaling_group.app[0].min_size
      max_size = aws_autoscaling_group.app[0].max_size
      desired_capacity = aws_autoscaling_group.app[0].desired_capacity
      availability_zones = aws_autoscaling_group.app[0].availability_zones
    } : null
    
    # Load Balancer-Informationen (falls vorhanden)
    load_balancer = var.compute_config.load_balancer_enabled ? {
      arn = aws_lb.app[0].arn
      dns_name = aws_lb.app[0].dns_name
      zone_id = aws_lb.app[0].zone_id
      target_group_arn = aws_lb_target_group.app[0].arn
    } : null
    
    # Spot-Instance-Informationen
    spot_configuration = var.compute_config.spot_instances_enabled ? {
      enabled = true
      max_price = var.compute_config.spot_max_price
      instance_types = var.compute_config.instance_types
    } : {
      enabled = false
      max_price = null
      instance_types = var.compute_config.instance_types
    }
    
    # Monitoring-Konfiguration
    monitoring = {
      cloudwatch_log_group = aws_cloudwatch_log_group.app.name
      metrics_namespace = "${var.project_name}/${var.environment}"
      dashboard_url = "https://console.aws.amazon.com/cloudwatch/home?region=${data.aws_region.current.name}#dashboards:name=${var.project_name}-${var.environment}"
    }
  }
}

# Sensitive Database-Outputs
output "database_configuration" {
  description = "Database configuration details (some values are sensitive)"
  value = {
    # Öffentliche Informationen
    instance_identifier = aws_db_instance.master.id
    endpoint = aws_db_instance.master.endpoint
    port = aws_db_instance.master.port
    engine = aws_db_instance.master.engine
    engine_version = aws_db_instance.master.engine_version
    
    # Sicherheits-Konfiguration
    security_configuration = {
      encrypted = aws_db_instance.master.storage_encrypted
      kms_key_id = aws_db_instance.master.kms_key_id
      vpc_security_group_ids = aws_db_instance.master.vpc_security_group_ids
      subnet_group_name = aws_db_instance.master.db_subnet_group_name
    }
    
    # Backup-Konfiguration
    backup_configuration = {
      retention_period = aws_db_instance.master.backup_retention_period
      backup_window = aws_db_instance.master.backup_window
      maintenance_window = aws_db_instance.master.maintenance_window
    }
    
    # Monitoring-Konfiguration
    monitoring_configuration = {
      monitoring_interval = aws_db_instance.master.monitoring_interval
      monitoring_role_arn = aws_db_instance.master.monitoring_role_arn
      performance_insights_enabled = aws_db_instance.master.performance_insights_enabled
    }
    
    # Replica-Informationen (falls vorhanden)
    read_replica = var.database_config.read_replica_enabled ? {
      identifier = aws_db_instance.replica[0].id
      endpoint = aws_db_instance.replica[0].endpoint
      port = aws_db_instance.replica[0].port
    } : null
  }
  
  # Nicht sensitive, da keine Credentials enthalten
  sensitive = false
}

# Sensitive Credential-Outputs
output "database_credentials" {
  description = "Database credentials (sensitive)"
  value = {
    secrets_manager_arn = aws_secretsmanager_secret.database_credentials.arn
    secrets_manager_name = aws_secretsmanager_secret.database_credentials.name
    kms_key_arn = aws_kms_key.secrets.arn
    kms_key_alias = aws_kms_alias.secrets.name
  }
  sensitive = true
}

# Erweiterte Security-Outputs
output "security_configuration" {
  description = "Security configuration details"
  value = {
    # Security Groups
    security_groups = {
      alb = {
        id = aws_security_group.alb.id
        name = aws_security_group.alb.name
        description = aws_security_group.alb.description
      }
      app_servers = {
        id = aws_security_group.app_servers.id
        name = aws_security_group.app_servers.name
        description = aws_security_group.app_servers.description
      }
      database = {
        id = aws_security_group.database.id
        name = aws_security_group.database.name
        description = aws_security_group.database.description
      }
    }
    
    # KMS Keys
    kms_keys = {
      secrets = {
        id = aws_kms_key.secrets.id
        arn = aws_kms_key.secrets.arn
        alias = aws_kms_alias.secrets.name
      }
    }
    
    # Compliance-Status
    compliance_status = {
      encryption_enabled = aws_db_instance.master.storage_encrypted
      vpc_flow_logs_enabled = var.security_config.enable_flow_logs
      cloudtrail_enabled = var.security_config.enable_cloudtrail
      config_enabled = var.security_config.enable_config
      guardduty_enabled = var.security_config.enable_guardduty
    }
    
    # Audit-Informationen
    audit_configuration = {
      cloudtrail_arn = var.security_config.enable_cloudtrail ? aws_cloudtrail.main[0].arn : null
      config_recorder_name = var.security_config.enable_config ? aws_config_configuration_recorder.main[0].name : null
      guardduty_detector_id = var.security_config.enable_guardduty ? aws_guardduty_detector.main[0].id : null
    }
  }
}

# Conditional Outputs für verschiedene Environments
output "environment_specific_outputs" {
  description = "Environment-specific configuration outputs"
  value = var.environment == "prod" ? {
    # Produktions-spezifische Outputs
    type = "production"
    high_availability = true
    backup_enabled = true
    monitoring_level = "detailed"
    
    # Produktions-URLs
    application_url = "https://${var.network_config.dns_domain}"
    admin_url = "https://admin.${var.network_config.dns_domain}"
    api_url = "https://api.${var.network_config.dns_domain}"
    
    # Produktions-Metriken
    metrics_dashboard = "https://console.aws.amazon.com/cloudwatch/home?region=${data.aws_region.current.name}#dashboards:name=${var.project_name}-production"
    cost_dashboard = "https://console.aws.amazon.com/billing/home#/dashboard"
    
    # Produktions-Alerts
    alert_endpoints = var.monitoring_config.alert_endpoints
    escalation_policy = "immediate"
    
    # Platzhalter: beide Zweige eines Conditionals brauchen denselben Objekttyp
    ssh_access = null
    debug_mode = null
    auto_shutdown = null
    
  } : {
    # Entwicklungs-spezifische Outputs
    type = "development"
    high_availability = false
    backup_enabled = false
    monitoring_level = "basic"
    
    # Entwicklungs-URLs (try() liefert null, falls kein Load Balancer existiert)
    application_url = try("http://${aws_lb.app[0].dns_name}", null)
    admin_url = try("http://${aws_lb.app[0].dns_name}/admin", null)
    api_url = try("http://${aws_lb.app[0].dns_name}/api", null)
    
    # Platzhalter für typgleiche Zweige
    metrics_dashboard = null
    cost_dashboard = null
    alert_endpoints = []
    escalation_policy = null
    
    # Entwicklungs-Hinweise
    ssh_access = "Available from VPC"
    debug_mode = "enabled"
    auto_shutdown = "enabled"
  }
}

# Aggregierte Outputs für andere Tools
output "terraform_outputs_summary" {
  description = "Summary of all important outputs for external tools"
  value = {
    # Netzwerk-Zusammenfassung
    network = {
      vpc_id = aws_vpc.main.id
      public_subnet_ids = aws_subnet.public[*].id
      private_subnet_ids = aws_subnet.private_app[*].id
      database_subnet_ids = aws_subnet.private_db[*].id
    }
    
    # Sicherheits-Zusammenfassung
    security = {
      alb_security_group_id = aws_security_group.alb.id
      app_security_group_id = aws_security_group.app_servers.id
      db_security_group_id = aws_security_group.database.id
    }
    
    # Compute-Zusammenfassung
    compute = {
      launch_template_id = aws_launch_template.app.id
      autoscaling_group_name = var.compute_config.auto_scaling_enabled ? aws_autoscaling_group.app[0].name : null
      load_balancer_dns = var.compute_config.load_balancer_enabled ? aws_lb.app[0].dns_name : null
    }
    
    # Database-Zusammenfassung
    database = {
      endpoint = aws_db_instance.master.endpoint
      port = aws_db_instance.master.port
      secrets_manager_arn = aws_secretsmanager_secret.database_credentials.arn
    }
    
    # Monitoring-Zusammenfassung
    monitoring = {
      log_group_name = aws_cloudwatch_log_group.app.name
      metrics_namespace = "${var.project_name}/${var.environment}"
    }
  }
}
HCL

Output-Debugging-Techniken:

# Alle Outputs anzeigen
terraform output

# Spezifischen Output anzeigen
terraform output network_configuration

# Raw Output ohne Formatierung (nur für String-, Zahl- oder Bool-Outputs)
terraform output -raw <name_eines_string_outputs>

# JSON-Format für maschinelle Verarbeitung
terraform output -json | jq .

# Sensitive Outputs debuggen (vorsichtig verwenden)
terraform output -json | jq '.database_credentials.value'
Bash

Output-Performance-Optimierung:

| Optimierung | Beschreibung | Anwendung |
|---|---|---|
| Lazy Evaluation | Outputs nur berechnen, wenn abgerufen | Komplexe Berechnungen |
| Caching | Berechnete Werte in locals | Wiederverwendete Werte |
| Conditional Logic | Nur notwendige Outputs | Umgebungsabhängige Daten |
| Structured Data | Gruppierte Outputs | Bessere Organisation |
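
Die Caching-Zeile der Tabelle lässt sich so skizzieren (Ressourcenname an die Beispiele oben angelehnt): Eine einmal in locals berechnete Struktur wird von Outputs wiederverwendet, statt den Ausdruck zu duplizieren.

# Berechnung einmal in locals, Wiederverwendung in Outputs
locals {
  subnet_summary = {
    for s in aws_subnet.public : s.availability_zone => s.cidr_block
  }
}

output "public_subnets_by_az" {
  description = "Public subnet CIDRs grouped by availability zone"
  value       = local.subnet_summary
}
HCL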

💡 Output-Best-Practices:

┌ Verwende aussagekräftige Beschreibungen
├ Gruppiere verwandte Outputs in Objekten
├ Markiere sensitive Outputs korrekt
├ Nutze Conditional Logic für Flexibilität
└ Dokumentiere komplexe Output-Strukturen
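
Ein minimaler Sketch, der mehrere dieser Punkte kombiniert (angelehnt an die Ressourcen aus den Beispielen oben): gruppierter Output, Conditional Logic und korrekte Sensitive-Markierung.

output "db_replica_connection" {
  description = "Grouped replica connection details; null when no replica exists"
  value = var.database_config.read_replica_enabled ? {
    endpoint = aws_db_instance.replica[0].endpoint
    port     = aws_db_instance.replica[0].port
  } : null
  sensitive = true
}
HCL
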
Lokale Werte (Locals) für komplexe Berechnungen

Was sind lokale Werte (Locals)? Locals sind berechnete Werte, die innerhalb einer Terraform-Konfiguration definiert und wiederverwendet werden können. Sie funktionieren wie Variablen, werden aber zur Laufzeit berechnet und können komplexe Ausdrücke, Funktionen und Conditional Logic enthalten. Locals sind der Schlüssel für saubere, wartbare Terraform-Konfigurationen.

Warum sind Locals so wichtig? Sie eliminieren Code-Duplikation, verbessern die Performance durch Caching von Berechnungen und machen komplexe Logik lesbar. Ohne Locals würdest du dieselben komplexen Ausdrücke an mehreren Stellen wiederholen, was zu Fehlern und schlechter Wartbarkeit führt.

Worauf musst du bei Locals achten? Locals werden bei jedem Plan und Apply neu berechnet. Komplexe Berechnungen können die Performance beeinträchtigen. Zirkuläre Abhängigkeiten zwischen Locals führen zu Fehlern. Die Reihenfolge der Definition spielt dagegen keine Rolle: Terraform löst Referenzen zwischen Locals über den Abhängigkeitsgraphen auf.

Wofür verwendest du Locals? Für komplexe Namensgebung, bedingte Konfigurationen, Daten-Transformationen, Tag-Generierung und überall, wo du komplexe Logik wiederverwenden musst.

# Erweiterte Locals-Konfiguration für komplexe Infrastruktur
locals {
  # ============================================================================
  # BASIS-BERECHNUNGEN UND METADATEN
  # ============================================================================
  
  # Aktuelle Zeit und Datum für Tagging
  # Achtung: timestamp() liefert bei jedem Lauf einen neuen Wert und führt
  # dadurch bei jedem Plan zu geänderten Tags
  current_timestamp = timestamp()
  current_date      = formatdate("YYYY-MM-DD", local.current_timestamp)
  current_time      = formatdate("hh:mm:ss", local.current_timestamp)
  
  # Git-Informationen (falls verfügbar)
  git_commit = try(trimspace(file("${path.module}/.git/refs/heads/main")), "unknown")
  git_branch = try(trimspace(file("${path.module}/.git/HEAD")), "unknown")
  
  # Environment-spezifische Einstellungen
  environment_config = {
    dev = {
      is_production = false
      backup_retention = 7
      monitoring_level = "basic"
      instance_count = 1
      storage_size = 20
      enable_logging = true
      enable_encryption = false
      cost_optimization = true
    }
    staging = {
      is_production = false
      backup_retention = 14
      monitoring_level = "standard"
      instance_count = 2
      storage_size = 50
      enable_logging = true
      enable_encryption = true
      cost_optimization = true
    }
    prod = {
      is_production = true
      backup_retention = 30
      monitoring_level = "detailed"
      instance_count = 3
      storage_size = 100
      enable_logging = true
      enable_encryption = true
      cost_optimization = false
    }
  }
  
  # Aktuelle Environment-Konfiguration
  current_env = local.environment_config[var.environment]
  
  # ============================================================================
  # ERWEITERTE NAMENSGEBUNG UND TAGGING
  # ============================================================================
  
  # Basis-Naming-Konventionen
  naming_convention = {
    # Basis-Präfix für alle Ressourcen
    prefix = "${var.project_name}-${var.environment}"
    
    # Spezifische Namen für verschiedene Ressourcen-Typen
    vpc_name = "${var.project_name}-${var.environment}-vpc"
    subnet_prefix = "${var.project_name}-${var.environment}"
    sg_prefix = "${var.project_name}-${var.environment}"
    
    # Erweiterte Namen mit Region und AZ
    regional_prefix = "${var.project_name}-${var.environment}-${data.aws_region.current.name}"
    
    # Namen mit Hash für Eindeutigkeit (Achtung: ändert sich durch timestamp() bei jedem Lauf)
    unique_suffix = substr(md5("${var.project_name}-${var.environment}-${local.current_timestamp}"), 0, 8)
    unique_prefix = "${var.project_name}-${var.environment}-${local.unique_suffix}"
  }
  
  # Erweiterte Tag-Generierung
  base_tags = {
    # Projekt-Informationen
    Project     = var.project_name
    Environment = var.environment
    Region      = data.aws_region.current.name
    
    # Deployment-Informationen
    ManagedBy     = "terraform"
    DeployedAt    = local.current_timestamp
    DeployedDate  = local.current_date
    TerraformWorkspace = terraform.workspace
    
    # Git-Informationen (falls verfügbar)
    GitCommit = local.git_commit
    GitBranch = local.git_branch
    
    # Environment-spezifische Tags
    IsProduction    = local.current_env.is_production
    BackupPolicy    = "${local.current_env.backup_retention}days"
    MonitoringLevel = local.current_env.monitoring_level
    CostOptimized   = local.current_env.cost_optimization
  }
  
  # Ressourcen-spezifische Tags
  compute_tags = merge(local.base_tags, {
    ResourceType = "compute"
    AutoShutdown = local.current_env.cost_optimization
    InstanceCount = local.current_env.instance_count
  })
  
  storage_tags = merge(local.base_tags, {
    ResourceType = "storage"
    Encrypted = local.current_env.enable_encryption
    BackupEnabled = true
    StorageSize = "${local.current_env.storage_size}GB"
  })
  
  network_tags = merge(local.base_tags, {
    ResourceType = "network"
    VPCFlowLogs = local.current_env.enable_logging
  })
  
  # ============================================================================
  # NETZWERK-BERECHNUNGEN
  # ============================================================================
  
  # Automatische CIDR-Berechnung für Subnets
  vpc_cidr = var.vpc_cidr
  
  # Berechne Subnet-CIDRs automatisch basierend auf AZs
  availability_zones = data.aws_availability_zones.available.names
  az_count = length(local.availability_zones)
  
  # Public Subnets (erste X Subnets)
  # Beispiel: bei vpc_cidr = "10.0.0.0/16" liefert cidrsubnet(..., 8, i)
  # die /24-Netze 10.0.0.0/24, 10.0.1.0/24, 10.0.2.0/24, ...
  public_subnet_cidrs = [
    for i in range(local.az_count) :
    cidrsubnet(local.vpc_cidr, 8, i)
  ]
  
  # Private App Subnets (nächste X Subnets)
  private_app_subnet_cidrs = [
    for i in range(local.az_count) :
    cidrsubnet(local.vpc_cidr, 8, i + 10)
  ]
  
  # Database Subnets (letzte X Subnets)
  database_subnet_cidrs = [
    for i in range(local.az_count) :
    cidrsubnet(local.vpc_cidr, 8, i + 20)
  ]
  
  # Subnet-Konfigurationen für verschiedene Tiers
  subnet_configurations = {
    public = {
      name_prefix = "${local.naming_convention.subnet_prefix}-public"
      cidrs = local.public_subnet_cidrs
      map_public_ip = true
      route_table_type = "public"
      tags = merge(local.network_tags, {
        SubnetType = "public"
        InternetAccess = "direct"
      })
    }
    
    private_app = {
      name_prefix = "${local.naming_convention.subnet_prefix}-private-app"
      cidrs = local.private_app_subnet_cidrs
      map_public_ip = false
      route_table_type = "private"
      tags = merge(local.network_tags, {
        SubnetType = "private"
        InternetAccess = "nat"
        Tier = "application"
      })
    }
    
    database = {
      name_prefix = "${local.naming_convention.subnet_prefix}-private-db"
      cidrs = local.database_subnet_cidrs
      map_public_ip = false
      route_table_type = "database"
      tags = merge(local.network_tags, {
        SubnetType = "private"
        InternetAccess = "none"
        Tier = "data"
      })
    }
  }
  
  # ============================================================================
  # SICHERHEITS-KONFIGURATIONEN
  # ============================================================================
  
  # Security Group-Regeln basierend auf Environment
  security_group_rules = {
    # ALB Security Group
    alb = {
      name = "${local.naming_convention.sg_prefix}-alb"
      description = "Security group for Application Load Balancer"
      ingress_rules = [
        {
          description = "HTTP from internet"
          from_port = 80
          to_port = 80
          protocol = "tcp"
          cidr_blocks = ["0.0.0.0/0"]
        },
        {
          description = "HTTPS from internet"
          from_port = 443
          to_port = 443
          protocol = "tcp"
          cidr_blocks = ["0.0.0.0/0"]
        }
      ]
      egress_rules = [
        {
          description = "All outbound traffic"
          from_port = 0
          to_port = 0
          protocol = "-1"
          cidr_blocks = ["0.0.0.0/0"]
        }
      ]
    }
    
    # Application Servers Security Group
    app = {
      name = "${local.naming_convention.sg_prefix}-app"
      description = "Security group for application servers"
      ingress_rules = concat(
        [
          {
            description = "HTTP from ALB"
            from_port = 8080
            to_port = 8080
            protocol = "tcp"
            cidr_blocks = local.private_app_subnet_cidrs
          }
        ],
        # SSH-Zugriff nur in Development
        local.current_env.is_production == false ? [
          {
            description = "SSH from VPC"
            from_port = 22
            to_port = 22
            protocol = "tcp"
            cidr_blocks = [local.vpc_cidr]
          }
        ] : []
      )
      egress_rules = [
        {
          description = "All outbound traffic"
          from_port = 0
          to_port = 0
          protocol = "-1"
          cidr_blocks = ["0.0.0.0/0"]
        }
      ]
    }
    
    # Database Security Group
    database = {
      name = "${local.naming_convention.sg_prefix}-db"
      description = "Security group for database servers"
      ingress_rules = [
        {
          description = "MySQL from app servers"
          from_port = 3306
          to_port = 3306
          protocol = "tcp"
          cidr_blocks = local.private_app_subnet_cidrs
        }
      ]
      egress_rules = []  # Keine ausgehenden Regeln für Datenbanken
    }
  }
  
  # ============================================================================
  # COMPUTE-KONFIGURATIONEN
  # ============================================================================
  
  # Instance-Konfiguration basierend auf Environment
  instance_configurations = {
    # Launch Template-Konfiguration
    launch_template = {
      name_prefix = "${local.naming_convention.prefix}-app"
      image_id = data.aws_ami.ubuntu.id
      instance_type = local.current_env.is_production ? "t3.large" : "t3.micro"
      
      # User Data mit komplexen Template-Variablen
      user_data_vars = {
        project_name = var.project_name
        environment = var.environment
        region = data.aws_region.current.name
        
        # Environment-spezifische Variablen
        debug_mode = !local.current_env.is_production
        log_level = local.current_env.is_production ? "INFO" : "DEBUG"
        monitoring_enabled = local.current_env.monitoring_level != "basic"
        
        # Database-Konfiguration
        database_host = "localhost"  # Wird später durch RDS ersetzt
        database_port = 3306
        database_name = replace("${var.project_name}_${var.environment}", "-", "_")
        
        # Application-Konfiguration
        app_port = 8080
        health_check_path = "/health"
        metrics_enabled = local.current_env.monitoring_level == "detailed"
        
        # Berechnete Werte
        instance_role = "application"
        deployment_timestamp = local.current_timestamp
      }
      
      # Block Device Mappings
      block_device_mappings = [
        {
          device_name = "/dev/sda1"
          volume_size = local.current_env.storage_size
          volume_type = "gp3"
          encrypted = local.current_env.enable_encryption
          iops = local.current_env.is_production ? 3000 : 1000
        }
      ]
      
      # Instance-Tags
      tags = merge(local.compute_tags, {
        Name = "${local.naming_convention.prefix}-app"
        InstanceProfile = "application"
      })
    }
    
    # Auto Scaling Group-Konfiguration
    autoscaling_group = {
      name = "${local.naming_convention.prefix}-app-asg"
      min_size = local.current_env.is_production ? 2 : 1
      max_size = local.current_env.is_production ? 10 : 3
      desired_capacity = local.current_env.instance_count
      
      # Health Check-Konfiguration
      health_check_type = "ELB"
      health_check_grace_period = 300
      
      # Termination Policies
      termination_policies = ["OldestInstance"]
      
      # Scaling Policies
      target_group_arns = []  # Wird später gesetzt
      
      tags = merge(local.compute_tags, {
        Name = "${local.naming_convention.prefix}-app-asg"
        AutoScalingGroup = "application"
      })
    }
  }
  
  # ============================================================================
  # DATABASE-KONFIGURATIONEN
  # ============================================================================
  
  # RDS-Konfiguration basierend auf Environment
  database_configurations = {
    master = {
      identifier = "${local.naming_convention.prefix}-master"
      engine = "mysql"
      engine_version = "8.0"
      instance_class = local.current_env.is_production ? "db.t3.medium" : "db.t3.micro"
      
      # Storage-Konfiguration
      allocated_storage = local.current_env.storage_size
      max_allocated_storage = local.current_env.storage_size * 2
      storage_type = "gp3"
      storage_encrypted = local.current_env.enable_encryption
      
      # Backup-Konfiguration
      backup_retention_period = local.current_env.backup_retention
      backup_window = "03:00-04:00"
      maintenance_window = "sun:04:00-sun:05:00"
      
      # Monitoring-Konfiguration
      monitoring_interval = local.current_env.monitoring_level == "detailed" ? 15 : 60
      performance_insights_enabled = local.current_env.monitoring_level == "detailed"
      
      # Multi-AZ für Produktion
      multi_az = local.current_env.is_production
      
      # Database-Name
      db_name = replace("${var.project_name}_${var.environment}", "-", "_")
      
      # Parameter Group
      parameter_group_family = "mysql8.0"
      
      tags = merge(local.storage_tags, {
        Name = "${local.naming_convention.prefix}-master"
        DatabaseRole = "master"
        Engine = "mysql"
      })
    }
    
    # Read Replica nur für Produktion
    replica = local.current_env.is_production ? {
      identifier = "${local.naming_convention.prefix}-replica"
      replicate_source_db = "${local.naming_convention.prefix}-master"
      instance_class = "db.t3.medium"
      
      # Monitoring für Replica
      monitoring_interval = 60
      performance_insights_enabled = false
      
      tags = merge(local.storage_tags, {
        Name = "${local.naming_convention.prefix}-replica"
        DatabaseRole = "replica"
        Engine = "mysql"
      })
    } : null
  }
  
  # ============================================================================
  # MONITORING UND LOGGING-KONFIGURATIONEN
  # ============================================================================
  
  # CloudWatch-Konfiguration
  cloudwatch_configurations = {
    # Log Groups
    log_groups = {
      application = {
        name = "/aws/ec2/${local.naming_convention.prefix}/application"
        retention_days = local.current_env.is_production ? 90 : 7
        tags = merge(local.base_tags, {
          LogType = "application"
        })
      }
      
      system = {
        name = "/aws/ec2/${local.naming_convention.prefix}/system"
        retention_days = local.current_env.is_production ? 30 : 7
        tags = merge(local.base_tags, {
          LogType = "system"
        })
      }
      
      access = {
        name = "/aws/elb/${local.naming_convention.prefix}/access"
        retention_days = local.current_env.is_production ? 180 : 14
        tags = merge(local.base_tags, {
          LogType = "access"
        })
      }
    }
    
    # CloudWatch Alarms
    alarms = local.current_env.monitoring_level != "basic" ? {
      high_cpu = {
        alarm_name = "${local.naming_convention.prefix}-high-cpu"
        comparison_operator = "GreaterThanThreshold"
        evaluation_periods = "2"
        metric_name = "CPUUtilization"
        namespace = "AWS/EC2"
        period = "120"
        statistic = "Average"
        threshold = "80"
        alarm_description = "This metric monitors ec2 cpu utilization"
        alarm_actions = []  # SNS-Topics werden später hinzugefügt
      }
      
      high_memory = {
        alarm_name = "${local.naming_convention.prefix}-high-memory"
        comparison_operator = "GreaterThanThreshold"
        evaluation_periods = "2"
        metric_name = "MemoryUtilization"
        namespace = "CWAgent"
        period = "120"
        statistic = "Average"
        threshold = "85"
        alarm_description = "This metric monitors memory utilization"
        alarm_actions = []
      }
    } : {}
  }
  
  # ============================================================================
  # BERECHNETE OUTPUT-STRUKTUREN
  # ============================================================================
  
  # Berechnete Outputs für andere Module
  computed_outputs = {
    # Netzwerk-Zusammenfassung
    network_summary = {
      vpc_id = var.vpc_id  # Wird durch echte VPC-ID ersetzt
      vpc_cidr = local.vpc_cidr
      availability_zones = local.availability_zones
      subnet_count = local.az_count
      
      # Subnet-Informationen
      public_subnets = {
        count = local.az_count
        cidrs = local.public_subnet_cidrs
      }
      
      private_app_subnets = {
        count = local.az_count
        cidrs = local.private_app_subnet_cidrs
      }
      
      database_subnets = {
        count = local.az_count
        cidrs = local.database_subnet_cidrs
      }
    }
    
    # Compute-Zusammenfassung
    compute_summary = {
      instance_type = local.instance_configurations.launch_template.instance_type
      min_instances = local.instance_configurations.autoscaling_group.min_size
      max_instances = local.instance_configurations.autoscaling_group.max_size
      desired_instances = local.instance_configurations.autoscaling_group.desired_capacity
      storage_size = "${local.current_env.storage_size}GB"
      encrypted = local.current_env.enable_encryption
    }
    
    # Database-Zusammenfassung
    database_summary = {
      engine = local.database_configurations.master.engine
      engine_version = local.database_configurations.master.engine_version
      instance_class = local.database_configurations.master.instance_class
      storage_size = "${local.database_configurations.master.allocated_storage}GB"
      backup_retention = "${local.database_configurations.master.backup_retention_period}days"
      multi_az = local.database_configurations.master.multi_az
      has_replica = local.database_configurations.replica != null
    }
    
    # Environment-Zusammenfassung
    environment_summary = {
      environment = var.environment
      is_production = local.current_env.is_production
      monitoring_level = local.current_env.monitoring_level
      cost_optimized = local.current_env.cost_optimization
      backup_retention = local.current_env.backup_retention
      encryption_enabled = local.current_env.enable_encryption
    }
  }
}
HCL

Performance-Optimierung für Locals:

# Performance-optimierte Locals-Strategien
locals {
  # ✅ Gut: Einfache Berechnungen cachen
  vpc_cidr_base = split("/", var.vpc_cidr)[0]
  vpc_cidr_prefix = split("/", var.vpc_cidr)[1]
  
  # ✅ Gut: Komplexe Listen-Operationen cachen
  filtered_availability_zones = [
    for az in data.aws_availability_zones.available.names :
    az if length(regexall("us-east-1[abc]", az)) > 0
  ]
  
  # ✅ Gut: Conditional Logic cachen
  production_settings = var.environment == "prod" ? {
    instance_count = 5
    backup_retention = 30
    monitoring = "detailed"
  } : {
    instance_count = 1
    backup_retention = 7
    monitoring = "basic"
  }
  
  # ⚠️ Vorsicht: Sehr komplexe Berechnungen
  # Diese sollten nur verwendet werden, wenn wirklich nötig
  complex_subnet_calculation = {
    for i, az in local.filtered_availability_zones :
    az => {
      public_cidr = cidrsubnet(var.vpc_cidr, 8, i)
      private_cidr = cidrsubnet(var.vpc_cidr, 8, i + 10)
      database_cidr = cidrsubnet(var.vpc_cidr, 8, i + 20)
      route_table_id = "rt-${md5("${az}-${i}")}"
    }
  }
}
HCL

Locals-Organisationsstrategien:

┌─────────────────────────────────────────────────────────────┐
│                Locals Organization Strategy                 │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  1. BASIS-KONFIGURATION                                     │
│     ├─ Environment Settings                                 │
│     ├─ Naming Conventions                                   │
│     ├─ Common Tags                                          │
│     └─ Timestamps                                           │
│                                                             │
│  2. BERECHNETE WERTE                                        │
│     ├─ Network CIDRs                                        │
│     ├─ Resource Counts                                      │
│     ├─ Conditional Logic                                    │
│     └─ Transformations                                      │
│                                                             │
│  3. KONFIGURATIONEN                                         │
│     ├─ Security Groups                                      │
│     ├─ Launch Templates                                     │
│     ├─ Database Settings                                    │
│     └─ Monitoring                                           │
│                                                             │
│  4. OUTPUT-STRUKTUREN                                       │
│     ├─ Summaries                                            │
│     ├─ Exports                                              │
│     └─ Integration Data                                     │
│                                                             │
└─────────────────────────────────────────────────────────────┘
Markdown

Locals-Best-Practices:

| Best Practice | Beschreibung | Beispiel |
|---|---|---|
| Logische Gruppierung | Verwandte Berechnungen zusammenfassen | Network, Compute, Database |
| Aussagekräftige Namen | Selbsterklärende Local-Namen | current_env statt env |
| Kommentare für Komplexität | Komplexe Logik dokumentieren | Warum bestimmte Berechnungen |
| Performance beachten | Schwere Berechnungen cachen | Einmal berechnen, oft verwenden |
| Abhängigkeiten verwalten | Zirkuläre Referenzen vermeiden | Klare Hierarchie |

🔧 Praktisches Beispiel – Locals in echten Ressourcen:

# Verwendung der Locals in echten Ressourcen
resource "aws_subnet" "public" {
  count = length(local.availability_zones)
  
  vpc_id                  = aws_vpc.main.id
  cidr_block              = local.subnet_configurations.public.cidrs[count.index]
  availability_zone       = local.availability_zones[count.index]
  map_public_ip_on_launch = local.subnet_configurations.public.map_public_ip
  
  tags = merge(
    local.subnet_configurations.public.tags,
    {
      Name = "${local.subnet_configurations.public.name_prefix}-${count.index + 1}"
      AvailabilityZone = local.availability_zones[count.index]
    }
  )
}

resource "aws_launch_template" "app" {
  name_prefix   = local.instance_configurations.launch_template.name_prefix
  image_id      = local.instance_configurations.launch_template.image_id
  instance_type = local.instance_configurations.launch_template.instance_type
  
  vpc_security_group_ids = [aws_security_group.app.id]
  
  # Verwendung der berechneten User Data-Variablen
  user_data = base64encode(templatefile("${path.module}/userdata.sh", 
    local.instance_configurations.launch_template.user_data_vars
  ))
  
  # Verwendung der berechneten Block Device Mappings
  dynamic "block_device_mappings" {
    for_each = local.instance_configurations.launch_template.block_device_mappings
    content {
      device_name = block_device_mappings.value.device_name
      ebs {
        volume_size = block_device_mappings.value.volume_size
        volume_type = block_device_mappings.value.volume_type
        encrypted   = block_device_mappings.value.encrypted
        iops        = block_device_mappings.value.iops
      }
    }
  }
  
  tag_specifications {
    resource_type = "instance"
    tags = local.instance_configurations.launch_template.tags
  }
}
HCL

💡 Locals-Performance-Tipps:

┌ Verwende Locals für Werte, die mehrfach verwendet werden
├ Cache komplexe for-expressions in Locals
├ Vermeide tiefe Verschachtelungen in Locals
└ Nutze separate Locals-Blöcke für bessere Organisation

⚠️ Locals-Stolperfallen:

| Problem | Symptom | Lösung |
|---|---|---|
| Zirkuläre Abhängigkeiten | Error: Cycle in local values | Abhängigkeiten neu strukturieren |
| Performance-Probleme | Langsame Plan-Zeiten | Komplexe Berechnungen optimieren |
| Unlesbare Komplexität | Schwer verständliche Locals | In kleinere Teile aufteilen |
| Fehlende Definition | Error: Reference to undeclared local value | Fehlendes Local deklarieren (die Reihenfolge im Block ist egal) |
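Die erste Stolperfalle lässt sich an einem minimalen Beispiel zeigen (die Namen und Werte sind frei gewählt):

```hcl
# ❌ Fehler: Ein Local darf sich nicht selbst referenzieren
# locals {
#   common_tags = merge(local.common_tags, { Extra = "value" })
#   # => Terraform meldet einen Zyklus bzw. Self-referential block
# }

# ✅ Lösung: Klare Hierarchie – Basis-Wert zuerst definieren, dann erweitern
locals {
  base_tags = {
    Project = "demo"
  }

  extended_tags = merge(local.base_tags, {
    Extra = "value"
  })
}
```

Die gleiche Strategie – Basis-Werte als eigene Locals, abgeleitete Werte darauf aufbauend – löst fast alle Zyklus-Fehler.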

Locals-Debugging-Techniken:

# Locals-Werte in terraform console anzeigen
terraform console
> local.current_env
> local.naming_convention.prefix
> local.computed_outputs.network_summary

# Locals erscheinen nicht direkt im Plan-Output – aufgelöste
# Werte lassen sich non-interaktiv über die Console prüfen
echo 'local.current_env' | terraform console

# Locals-Abhängigkeiten analysieren
terraform graph | grep local
Bash

Mit diesen erweiterten Variablen- und Output-Features schreibst du nicht nur funktionalen, sondern intelligenten Terraform-Code. Komplexe Validierungen fangen Fehler ab, bevor sie kostspielig werden, sensitive Variables schützen deine Credentials, Locals eliminieren Duplikation und machen komplexe Logik wartbar, während erweiterte Outputs deine Module flexibel und wiederverwendbar machen. Diese Techniken sind der Unterschied zwischen einfachen Terraform-Skripten und professionellen, produktionstauglichen Infrastruktur-Definitionen.

Built-in Funktionen und Expressions

Was sind Built-in Funktionen in Terraform? Built-in Funktionen sind vorgefertigte Werkzeuge, die in HCL integriert sind und komplexe Operationen ermöglichen. Sie verwandeln statische Konfigurationen in dynamische, intelligente Infrastruktur-Definitionen. Terraform bietet über 100 Built-in Funktionen für String-Manipulation, Mathematik, Collection-Verarbeitung und Conditional Logic.

Warum sind Built-in Funktionen entscheidend? Sie eliminieren die Notwendigkeit für externes Scripting und machen deine Terraform-Konfigurationen selbstständig. Ohne diese Funktionen müsstest du komplexe Logik in externe Tools auslagern, was zu fragilen, schwer wartbaren Setups führt. Built-in Funktionen halten alles in Terraform und machen deine Infrastruktur vorhersagbar.

Worauf musst du bei Built-in Funktionen achten? Funktionen werden bei jedem Plan und Apply ausgeführt. Komplexe Verschachtelungen können die Performance beeinträchtigen und die Lesbarkeit reduzieren. Nicht alle Funktionen sind in allen Terraform-Versionen verfügbar. Fehlende Input-Validierung kann zu Laufzeitfehlern führen.

Wofür verwendest du Built-in Funktionen? Für dynamische Namensgebung, Daten-Transformation, Conditional Logic, Collection-Verarbeitung, JSON/YAML-Generierung und komplexe Berechnungen, die zur Infrastruktur-Erstellung benötigt werden.
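Ein kompakter Vorgeschmack, bevor wir in die einzelnen Kategorien einsteigen – ein paar häufig genutzte Funktionen in einem Locals-Block (alle Werte sind Beispielwerte):

```hcl
locals {
  azs = ["us-east-1a", "us-east-1b", "us-east-1c"]

  az_count    = length(local.azs)                          # 3
  az_suffixes = [for az in local.azs : substr(az, -1, 1)]  # ["a", "b", "c"]
  name        = join("-", ["demo", "prod"])                # "demo-prod"
  min_nodes   = max(2, local.az_count)                     # 3
  first_cidr  = cidrsubnet("10.0.0.0/16", 8, 0)            # "10.0.0.0/24"
}
```

Jede dieser Funktionen wird bei jedem Plan neu ausgewertet – die Ergebnisse sind aber deterministisch, solange die Eingaben gleich bleiben.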

String-Manipulation und Formatierung

Was bietet Terraform für String-Operationen? Terraform verfügt über umfassende String-Funktionen für Formatierung, Manipulation, Validierung und Transformation. Diese Funktionen ermöglichen es dir, dynamische Konfigurationen zu erstellen, die sich an verschiedene Umgebungen anpassen können.

Warum ist String-Manipulation so wichtig? In Terraform arbeitest du ständig mit Strings – Resource-Namen, Tags, URLs, Pfade und Konfigurationswerte. Professionelle String-Manipulation macht deine Konfigurationen flexibel und wartbar. Ohne sie entstehen statische, schwer änderbare Definitionen.

Worauf solltest du bei String-Funktionen achten? String-Operationen sind case-sensitive. Leere Strings und null-Werte können zu unerwarteten Ergebnissen führen. Regex-Pattern müssen korrekt escaped werden. Performance kann bei großen String-Operationen leiden.

Wofür verwendest du String-Funktionen? Für Resource-Naming, Tag-Generierung, URL-Konstruktion, Path-Manipulation, JSON-Template-Erstellung und Daten-Validation.

🔧 Praktisches Beispiel – Erweiterte String-Funktionen:

# Umfassende String-Manipulation für Infrastruktur-Naming
locals {
  # Basis-Variablen für String-Operationen
  project_name = "e-commerce-platform"
  environment  = "production"
  region       = "us-east-1"
  team_name    = "Platform Engineering"
  
  # ============================================================================
  # STRING-FORMATIERUNG UND -TRANSFORMATION
  # ============================================================================
  
  # Basis-String-Operationen
  string_operations = {
    # Case-Manipulation
    project_upper = upper(local.project_name)                    # "E-COMMERCE-PLATFORM"
    project_lower = lower(local.project_name)                    # "e-commerce-platform"
    project_title = title(replace(local.project_name, "-", " ")) # "E Commerce Platform"
    
    # String-Bereinigung für verschiedene Kontexte
    dns_safe_name = replace(lower(local.project_name), "_", "-")              # DNS-konforme Namen
    db_safe_name  = replace(replace(local.project_name, "-", "_"), ".", "_")  # Database-konforme Namen
    s3_safe_name  = replace(lower(local.project_name), "_", "-")              # S3-konforme Namen
    
    # Längen-Validierung und -Anpassung
    truncated_name = substr(local.project_name, 0, 10)           # "e-commerce" (erste 10 Zeichen)
    padded_name    = format("%-20s", local.project_name)         # Links-ausgerichtet auf 20 Zeichen
    
    # String-Kombinationen
    full_prefix = join("-", [local.project_name, local.environment, local.region])
    compact_prefix = join("", [substr(local.project_name, 0, 3), local.environment, substr(local.region, -1, 1)])
  }
  
  # Erweiterte Naming-Konventionen mit String-Funktionen
  naming_patterns = {
    # Standard-Ressourcen-Namen
    vpc_name = format("%s-%s-vpc", local.string_operations.dns_safe_name, local.environment)
    
    # Subnet-Namen mit Index-Formatierung
    subnet_pattern = format("%s-%s-subnet-%%02d", local.string_operations.dns_safe_name, local.environment)
    
    # Security Group-Namen mit Rolle
    sg_pattern = format("%s-%s-%%s-sg", local.string_operations.dns_safe_name, local.environment)
    
    # Database-Namen (Unterstriche für SQL-Kompatibilität)
    database_name = format("%s_%s_db", local.string_operations.db_safe_name, local.environment)
    
    # S3-Bucket-Namen (global eindeutig)
    # ⚠️ timestamp() liefert bei jedem Lauf einen neuen Wert – für
    # dauerhaft stabile Bucket-Namen besser random_id verwenden
    bucket_pattern = format("%s-%s-%%s-%s", 
      local.string_operations.s3_safe_name, 
      local.environment, 
      formatdate("YYYY-MM", timestamp())
    )
    
    # CloudWatch-Log-Gruppen
    log_group_pattern = format("/aws/%%s/%s-%s", local.string_operations.dns_safe_name, local.environment)
  }
  
  # ============================================================================
  # ERWEITERTE STRING-VALIDIERUNG UND -BEREINIGUNG
  # ============================================================================
  
  # String-Validierung mit regex
  # Hinweis: Die Eingabewerte stehen außerhalb der Map, denn ein Local
  # darf sich nicht selbst referenzieren (sonst: Self-referential block)
  admin_email   = "admin@company.com"
  example_cidr  = "10.0.0.0/16"
  temp_password = "TempPass123!"
  
  validation_results = {
    # DNS-Namen-Validierung
    is_valid_dns = can(regex("^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$", local.string_operations.dns_safe_name))
    
    # E-Mail-Validierung
    is_valid_email = can(regex("^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,}$", local.admin_email))
    
    # IP-Adress-Validierung
    is_valid_cidr = can(cidrhost(local.example_cidr, 0))
    
    # Passwort-Komplexität-Prüfung (Map-Einträge können einander nicht
    # referenzieren, daher werden die Checks unten inline wiederholt)
    has_upper   = can(regex("[A-Z]", local.temp_password))
    has_lower   = can(regex("[a-z]", local.temp_password))
    has_digit   = can(regex("[0-9]", local.temp_password))
    has_special = can(regex("[!@#$%^&*()_+=-]", local.temp_password))
    is_complex_password = (
      can(regex("[A-Z]", local.temp_password)) &&
      can(regex("[a-z]", local.temp_password)) &&
      can(regex("[0-9]", local.temp_password)) &&
      can(regex("[!@#$%^&*()_+=-]", local.temp_password)) &&
      length(local.temp_password) >= 8
    )
  }
  
  # String-Template-Funktionen
  template_strings = {
    # Erweiterte Formatierung mit mehreren Variablen
    resource_description = format(
      "Managed by Terraform for %s project in %s environment, deployed to %s region by %s team",
      local.project_name,
      local.environment,
      local.region,
      local.team_name
    )
    
    # JSON-Template für Policies
    policy_template = jsonencode({
      Version = "2012-10-17"
      Statement = [
        {
          Effect = "Allow"
          Action = [
            "s3:GetObject",
            "s3:PutObject"
          ]
          Resource = format("arn:aws:s3:::%s/*", format(local.naming_patterns.bucket_pattern, "data"))
          Condition = {
            StringEquals = {
              "s3:x-amz-server-side-encryption" = "AES256"
            }
          }
        }
      ]
    })
    
    # YAML-Template für User Data
    user_data_yaml = yamlencode({
      users = [
        {
          name = "ubuntu"
          sudo = "ALL=(ALL) NOPASSWD:ALL"
          shell = "/bin/bash"
          ssh_authorized_keys = [
            format("ssh-rsa %s %s@%s", 
              "AAAAB3NzaC1yc2EAAAADAQABAAABAQC...", 
              "admin", 
              replace(local.project_name, "-", "")
            )
          ]
        }
      ]
      packages = [
        "curl",
        "wget",
        "unzip",
        "docker.io"
      ]
      runcmd = [
        format("echo 'Project: %s' >> /etc/motd", local.project_name),
        format("echo 'Environment: %s' >> /etc/motd", local.environment),
        "systemctl enable docker",
        "systemctl start docker"
      ]
    })
  }
  
  # ============================================================================
  # URL UND PATH-MANIPULATION
  # ============================================================================
  
  # URL-Konstruktion für verschiedene Services
  # Application-Basis-URL separat definiert – ein Local darf sich nicht
  # selbst referenzieren, daher kann app_base_url nicht innerhalb von
  # service_urls wiederverwendet werden
  app_base_url = format("https://%s.%s.example.com", local.environment, replace(local.project_name, "-", ""))
  
  service_urls = {
    # Application URLs
    app_base_url = local.app_base_url
    api_url      = format("%s/api/v1", local.app_base_url)
    admin_url    = format("%s/admin", local.app_base_url)
    
    # Monitoring URLs
    grafana_url = format("https://grafana-%s.monitoring.example.com", local.environment)
    prometheus_url = format("https://prometheus-%s.monitoring.example.com", local.environment)
    
    # Dashboard URLs mit URL-Encoding
    cloudwatch_dashboard = format(
      "https://console.aws.amazon.com/cloudwatch/home?region=%s#dashboards:name=%s",
      local.region,
      urlencode(format("%s-%s-dashboard", local.project_name, local.environment))
    )
  }
  
  # Path-Manipulation für verschiedene Kontexte
  file_paths = {
    # Linux-Pfade für Konfiguration
    app_config_dir = format("/etc/%s", replace(local.project_name, "-", "_"))
    app_log_dir = format("/var/log/%s", replace(local.project_name, "-", "_"))
    app_data_dir = format("/opt/%s/data", replace(local.project_name, "-", "_"))
    
    # Relative Pfade für Module
    module_path = format("./modules/%s", local.environment)
    template_path = format("${path.module}/templates/%s", local.environment)
    
    # S3-Pfade für Backups
    backup_prefix = format("backups/%s/%s", local.environment, formatdate("YYYY/MM/DD", timestamp()))
    log_prefix = format("logs/%s/%s", local.environment, formatdate("YYYY/MM/DD", timestamp()))
  }
}

# Praktische Anwendung der String-Funktionen in Ressourcen
resource "aws_s3_bucket" "data" {
  bucket = format(local.naming_patterns.bucket_pattern, "data")
  
  tags = {
    Name = format(local.naming_patterns.bucket_pattern, "data")
    Description = local.template_strings.resource_description
    Environment = local.environment
    Project = local.project_name
  }
}

# Hinweis: aws_s3_bucket_object ist seit AWS-Provider v4 deprecated
resource "aws_s3_object" "config" {
  bucket = aws_s3_bucket.data.bucket
  key    = format("%s/config.yaml", local.file_paths.backup_prefix)
  content = local.template_strings.user_data_yaml
  
  tags = {
    ConfigType = "user-data"
    CreatedAt = formatdate("YYYY-MM-DD hh:mm:ss ZZZ", timestamp())
  }
}

# CloudWatch Log Group mit String-Manipulation
resource "aws_cloudwatch_log_group" "application" {
  name              = format(local.naming_patterns.log_group_pattern, "application")
  retention_in_days = local.environment == "production" ? 90 : 7
  
  tags = {
    Name = format("Logs for %s", title(replace(local.project_name, "-", " ")))
    LogType = "application"
  }
}
HCL

String-Funktionen-Kategorien:

| Kategorie | Funktionen | Anwendungsfall | Performance |
|---|---|---|---|
| Case-Manipulation | upper(), lower(), title() | Naming-Konventionen | Sehr hoch |
| String-Operationen | substr(), replace(), trim() | Text-Transformation | Hoch |
| Formatierung | format(), formatdate() | Template-Generierung | Hoch |
| Validierung | regex(), can() | Input-Validierung | Mittel |
| Encoding | urlencode(), base64encode() | Daten-Übertragung | Hoch |
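Je ein Vertreter pro Kategorie, mit den erwarteten Ergebnissen als Kommentar (alle Eingabewerte sind Beispielwerte):

```hcl
locals {
  raw_name = "  Web-App  "

  string_demo = {
    upper_name = upper(trimspace(local.raw_name))                       # "WEB-APP"
    short_name = substr(trimspace(local.raw_name), 0, 3)                # "Web"
    formatted  = format("%s-%02d", "app", 7)                            # "app-07"
    is_valid   = can(regex("^[A-Za-z-]+$", trimspace(local.raw_name)))  # true
    encoded    = base64encode("hello")                                  # "aGVsbG8="
  }
}
```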

💡 String-Performance-Tipps:

┌ Cache komplexe String-Operationen in Locals
├ Verwende format() statt String-Interpolation für bessere Lesbarkeit
├ Validiere Inputs früh mit can() und regex()
└ Nutze trimspace() für User-Input-Bereinigung
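Der dritte und vierte Tipp in Aktion – eine Variable, die Eingaben früh validiert (Variablenname und Regex sind als Beispiel gewählt):

```hcl
variable "team_email" {
  type        = string
  description = "Kontakt-E-Mail des verantwortlichen Teams"

  validation {
    # trimspace() bereinigt versehentliche Leerzeichen,
    # can() wandelt den Regex-Fehler in true/false um
    condition     = can(regex("^[^@\\s]+@[^@\\s]+\\.[a-zA-Z]{2,}$", trimspace(var.team_email)))
    error_message = "team_email muss eine gültige E-Mail-Adresse sein."
  }
}
```

So scheitert bereits terraform plan mit einer klaren Fehlermeldung, statt dass ein fehlerhafter Wert in Tags oder Alarmen landet.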
Collection-Funktionen (for, filter, merge, flatten)

Was sind Collection-Funktionen? Collection-Funktionen verarbeiten Listen, Maps und Sets. Sie ermöglichen es dir, Daten zu filtern, zu transformieren, zu kombinieren und zu organisieren. Diese Funktionen sind das Herzstück dynamischer Terraform-Konfigurationen.

Warum sind Collection-Funktionen unverzichtbar? Sie eliminieren statische, repetitive Konfigurationen. Statt Dutzende ähnlicher Ressourcen manuell zu definieren, erstellst du sie dynamisch basierend auf Daten. Das macht deine Infrastruktur skalierbarer und wartbarer.

Worauf musst du bei Collection-Funktionen achten? Performance kann bei großen Collections leiden. Verschachtelte Collection-Operationen können schwer lesbar werden. Type-Konsistenz ist wichtig – vermische nicht verschiedene Datentypen. Null-Werte können zu unerwarteten Ergebnissen führen.

Wofür verwendest du Collection-Funktionen? Für dynamische Ressourcen-Erstellung, Daten-Aggregation, Configuration-Merging, Filtering von Eingaben und Transformation zwischen verschiedenen Datenformaten.

# Erweiterte Collection-Funktionen für dynamische Infrastruktur
locals {
  # ============================================================================
  # BASIS-DATEN FÜR COLLECTION-OPERATIONEN
  # ============================================================================
  
  # Availability Zones-Daten
  all_availability_zones = ["us-east-1a", "us-east-1b", "us-east-1c", "us-east-1d", "us-east-1e", "us-east-1f"]
  
  # Environment-Konfigurationen
  environment_configs = {
    dev = {
      instance_count = 1
      instance_type = "t3.micro"
      storage_size = 20
      backup_enabled = false
      monitoring_level = "basic"
      allowed_cidrs = ["10.0.0.0/16", "172.16.0.0/12"]
    }
    staging = {
      instance_count = 2
      instance_type = "t3.small"
      storage_size = 50
      backup_enabled = true
      monitoring_level = "standard"
      allowed_cidrs = ["10.0.0.0/16", "172.16.0.0/12", "192.168.0.0/16"]
    }
    prod = {
      instance_count = 5
      instance_type = "t3.large"
      storage_size = 100
      backup_enabled = true
      monitoring_level = "detailed"
      allowed_cidrs = ["10.0.0.0/16"]
    }
  }
  
  # Application-Services mit unterschiedlichen Anforderungen
  application_services = {
    frontend = {
      port = 80
      protocol = "HTTP"
      health_check_path = "/health"
      cpu_request = "100m"
      memory_request = "128Mi"
      replicas = 3
      environment_variables = {
        NODE_ENV = "production"
        API_URL = "https://api.example.com"
        CDN_URL = "https://cdn.example.com"
      }
    }
    backend = {
      port = 8080
      protocol = "HTTP"
      health_check_path = "/api/health"
      cpu_request = "500m"
      memory_request = "512Mi"
      replicas = 5
      environment_variables = {
        DATABASE_URL = "postgresql://..."
        REDIS_URL = "redis://..."
        LOG_LEVEL = "info"
      }
    }
    worker = {
      port = 9090
      protocol = "HTTP"
      health_check_path = "/metrics"
      cpu_request = "200m"
      memory_request = "256Mi"
      replicas = 2
      environment_variables = {
        QUEUE_URL = "redis://..."
        WORKER_CONCURRENCY = "10"
        LOG_LEVEL = "debug"
      }
    }
    database = {
      port = 5432
      protocol = "TCP"
      health_check_path = null
      cpu_request = "1000m"
      memory_request = "2Gi"
      replicas = 1
      environment_variables = {
        POSTGRES_DB = "application"
        POSTGRES_USER = "app_user"
        POSTGRES_MAX_CONNECTIONS = "100"
      }
    }
  }
  
  # ============================================================================
  # FOR-EXPRESSIONS FÜR DATEN-TRANSFORMATION
  # ============================================================================
  
  # Erweiterte for-expressions für verschiedene Anwendungsfälle
  for_expression_examples = {
    # Einfache Listen-Transformation
    subnet_cidrs = [
      for i, az in slice(local.all_availability_zones, 0, 3) :
      cidrsubnet("10.0.0.0/16", 8, i)
    ]
    
    # Map-Transformation mit Conditional Logic
    environment_instance_types = {
      for env, config in local.environment_configs :
      env => config.instance_type
      if config.instance_count > 0
    }
    
    # Komplexe Objekt-Transformation
    service_configurations = {
      for service_name, service_config in local.application_services :
      service_name => {
        name = service_name
        port = service_config.port
        replicas = service_config.replicas
        resource_requests = {
          cpu = service_config.cpu_request
          memory = service_config.memory_request
        }
        environment = [
          for key, value in service_config.environment_variables :
          {
            name = key
            value = value
          }
        ]
        health_check = service_config.health_check_path != null ? {
          path = service_config.health_check_path
          port = service_config.port
        } : null
      }
    }
    
    # Nested for-expressions für Multi-Dimensional Data
    all_service_environment_combinations = flatten([
      for env_name, env_config in local.environment_configs : [
        for service_name, service_config in local.application_services : {
          key = "${env_name}-${service_name}"
          environment = env_name
          service = service_name
          instance_count = env_config.instance_count
          service_replicas = service_config.replicas
          total_instances = env_config.instance_count * service_config.replicas
          resource_suffix = "${env_name}-${service_name}"
        }
        if service_name != "database" || env_name == "prod"  # Database nur in Produktion
      ]
    ])
    
    # Conditional for-expressions
    production_services = {
      for service_name, service_config in local.application_services :
      service_name => merge(service_config, {
        replicas = service_config.replicas * 2  # Doppelte Replicas für Produktion
        monitoring_enabled = true
      })
      if var.environment == "prod"
    }
  }
  
  # ============================================================================
  # FILTER-FUNKTIONEN FÜR DATEN-SELEKTION
  # ============================================================================
  
  # Erweiterte Filtering-Operationen
  filtered_data = {
    # Availability Zones filtern (nur die ersten 3)
    selected_azs = slice(local.all_availability_zones, 0, 3)
    
    # Services nach Typ filtern
    web_services = {
      for name, config in local.application_services :
      name => config
      if config.protocol == "HTTP"
    }
    
    # High-Memory Services identifizieren
    memory_intensive_services = [
      for name, config in local.application_services :
      name
      if can(regex("Gi", config.memory_request))
    ]
    
    # Environment-spezifische Allowed CIDRs
    # try() statt lookup(): lookup(..., {}) würde beim anschließenden
    # Attribut-Zugriff .allowed_cidrs auf dem leeren Default-Objekt fehlschlagen
    current_allowed_cidrs = try(local.environment_configs[var.environment].allowed_cidrs, [])
    
    # Services mit Health Checks
    services_with_health_checks = [
      for name, config in local.application_services :
      {
        name = name
        health_check_path = config.health_check_path
        port = config.port
      }
      if config.health_check_path != null
    ]
    
    # Conditional Service-Konfiguration basierend auf Environment
    environment_specific_services = {
      for name, config in local.application_services :
      name => merge(config, {
        # Entwicklung: Reduzierte Replicas
        replicas = var.environment == "dev" ? 1 : config.replicas
        # Produktion: Erhöhte Ressourcen
        cpu_request = var.environment == "prod" ? "${tonumber(split("m", config.cpu_request)[0]) * 2}m" : config.cpu_request
      })
    }
  }
  
  # ============================================================================
  # MERGE-FUNKTIONEN FÜR DATEN-KOMBINATION
  # ============================================================================
  
  # Erweiterte Merge-Operationen
  # Die Tag-Bausteine sind eigene Locals – ein Local darf sich nicht selbst
  # referenzieren, final_tags könnte die Geschwister-Attribute desselben
  # Objekts daher nicht per merge() kombinieren
  base_tags = {
    ManagedBy = "terraform"
    Project = var.project_name
    # ⚠️ timestamp() liefert bei jedem Lauf einen neuen Wert und erzeugt
    # dadurch permanente Diffs in den Tags
    CreatedAt = timestamp()
  }
  
  environment_tags = {
    Environment = var.environment
    CostCenter = var.environment == "prod" ? "production" : "development"
    BackupPolicy = var.environment == "prod" ? "daily" : "weekly"
  }
  
  merged_configurations = {
    base_tags = local.base_tags
    environment_tags = local.environment_tags
    
    # Finale Tags für alle Ressourcen
    final_tags = merge(
      local.base_tags,
      local.environment_tags,
      var.additional_tags  # Externe Tags
    )
    
    # Service-Konfigurationen mit Environment-Overrides mergen
    final_service_configs = {
      for name, config in local.application_services :
      name => merge(
        config,
        # try() liefert {}, falls das Environment keine Overrides definiert
        try(local.environment_configs[var.environment].service_overrides, {}),
        {
          # Environment-spezifische Anpassungen
          replicas = var.environment == "dev" ? 1 : (
            var.environment == "staging" ? max(1, config.replicas - 1) : config.replicas
          )
          monitoring_enabled = var.environment == "prod"
          log_level = var.environment == "prod" ? "warn" : "debug"
        }
      )
    }
    
    # Network-Konfiguration mergen
    network_config = merge(
      {
        # Basis-Network-Einstellungen
        vpc_cidr = "10.0.0.0/16"
        enable_nat_gateway = true
        enable_vpn_gateway = false
      },
      # Environment-spezifische Network-Einstellungen
      var.environment == "prod" ? {
        enable_nat_gateway = true
        enable_vpn_gateway = true
        enable_flow_logs = true
        enable_dns_hostnames = true
      } : {},
      # Zusätzliche Custom-Einstellungen
      var.custom_network_config
    )
  }
  
  # ============================================================================
  # FLATTEN-FUNKTIONEN FÜR HIERARCHISCHE DATEN
  # ============================================================================
  
  # Komplexe Flatten-Operationen
  flattened_data = {
    # Alle Service-Environment-Variable-Kombinationen flattieren
    all_environment_variables = flatten([
      for service_name, service_config in local.application_services : [
        for var_name, var_value in service_config.environment_variables : {
          service = service_name
          variable_name = var_name
          variable_value = var_value
          full_key = "${service_name}_${var_name}"
        }
      ]
    ])
    
    # Security Group-Regeln für alle Services flattieren
    # concat() mit leerer Liste statt eines null-Zweigs – so landen keine
    # null-Einträge im Ergebnis, die spätere Zugriffe zum Absturz bringen
    all_security_group_rules = flatten([
      for service_name, service_config in local.application_services : concat(
        [
          {
            service = service_name
            type = "ingress"
            from_port = service_config.port
            to_port = service_config.port
            protocol = "tcp"
            description = "Allow ${service_config.protocol} traffic to ${service_name}"
          }
        ],
        # Health Check-Port (falls unterschiedlich)
        service_config.health_check_path != null && service_config.port != 80 ? [
          {
            service = service_name
            type = "ingress"
            from_port = 80
            to_port = 80
            protocol = "tcp"
            description = "Allow health check traffic to ${service_name}"
          }
        ] : []
      )
      if service_config.port != null
    ])
    
    # Subnet-Konfigurationen für alle AZs und Tiers flattieren
    all_subnet_configurations = flatten([
      for tier in ["public", "private", "database"] : [
        for i, az in local.filtered_data.selected_azs : {
          name = "${var.project_name}-${var.environment}-${tier}-${i + 1}"
          tier = tier
          availability_zone = az
          cidr_block = cidrsubnet("10.0.0.0/16", 8, 
            tier == "public" ? i : (
              tier == "private" ? i + 10 : i + 20
            )
          )
          map_public_ip = tier == "public"
          route_table_association = tier == "public" ? "public" : "private"
        }
      ]
    ])
    
    # Load Balancer-Target-Konfigurationen flattieren
    load_balancer_targets = flatten([
      for service_name, service_config in local.filtered_data.web_services : [
        for i in range(service_config.replicas) : {
          service = service_name
          instance_index = i
          target_id = "${service_name}-${i}"
          port = service_config.port
          health_check_path = service_config.health_check_path
        }
      ]
    ])
  }
  
  # ============================================================================
  # ERWEITERTE COLLECTION-KOMBINATIONEN
  # ============================================================================
  
  # Komplexe Collection-Operationen kombinieren
  advanced_operations = {
    # Service-Discovery-Konfiguration generieren
    service_discovery_config = {
      for service_name, service_config in local.application_services :
      service_name => {
        name = service_name
        port = service_config.port
        instances = [
          for i in range(service_config.replicas) : {
            id = "${service_name}-${i}"
            address = "10.0.${i + 1}.${index(keys(local.application_services), service_name) + 10}"
            port = service_config.port
            health_check = service_config.health_check_path != null ? {
              http = "http://10.0.${i + 1}.${index(keys(local.application_services), service_name) + 10}:${service_config.port}${service_config.health_check_path}"
            } : null
          }
        ]
        load_balancer = {
          algorithm = "round_robin"
          health_check = service_config.health_check_path != null
        }
      }
    }
    
    # Monitoring-Konfiguration für alle Services
    monitoring_targets = merge([
      for service_name, service_config in local.application_services : {
        for i in range(service_config.replicas) :
        "${service_name}-${i}" => {
          job_name = service_name
          instance = "${service_name}-${i}"
          targets = ["10.0.${i + 1}.${index(keys(local.application_services), service_name) + 10}:${service_config.port}"]
          labels = {
            service = service_name
            environment = var.environment
            replica = tostring(i)
          }
          scrape_interval = "30s"
          metrics_path = "/metrics"
        }
      }
      if service_config.health_check_path != null  # Nur Services mit Health Checks monitoren
    ]...)
    
    # Resource-Quotas basierend auf Service-Anforderungen berechnen
    total_resource_requirements = {
      total_cpu = sum([
        for service_name, service_config in local.application_services :
        tonumber(split("m", service_config.cpu_request)[0]) * service_config.replicas
      ])
      
      total_memory_mb = sum([
        for service_name, service_config in local.application_services :
        (
          # "Gi"-Angaben (z. B. database: "2Gi") in MiB umrechnen –
          # split("Mi", "2Gi") würde sonst zu tonumber("2Gi") und damit
          # zu einem Fehler führen
          endswith(service_config.memory_request, "Gi")
          ? tonumber(trimsuffix(service_config.memory_request, "Gi")) * 1024
          : tonumber(trimsuffix(service_config.memory_request, "Mi"))
        ) * service_config.replicas
      ])
      
      total_instances = sum([
        for service_name, service_config in local.application_services :
        service_config.replicas
      ])
      
      services_count = length(keys(local.application_services))
    }
  }
}
HCL

Collection-Funktionen im Detail:

Funktion  | Zweck                 | Input-Typ     | Output-Typ | Performance
for       | Transformation        | List/Map      | List/Map   | Hoch
for … if  | Selektion (Filtering) | List/Map      | List/Map   | Hoch
merge()   | Kombination           | Maps          | Map        | Sehr hoch
flatten() | Hierarchie-Auflösung  | List of Lists | List       | Mittel
slice()   | Teilmenge             | List          | List       | Sehr hoch
concat()  | Verkettung            | Lists         | List       | Hoch

Hinweis: Ein filter() als eigenständige Funktion gibt es in HCL nicht – Filtering geschieht über die if-Klausel in for-expressions.

🔧 Praktische Anwendung in Ressourcen:

# Dynamische Subnet-Erstellung mit Collection-Funktionen
resource "aws_subnet" "main" {
  for_each = {
    for subnet in local.flattened_data.all_subnet_configurations :
    subnet.name => subnet
  }
  
  vpc_id                  = aws_vpc.main.id
  cidr_block              = each.value.cidr_block
  availability_zone       = each.value.availability_zone
  map_public_ip_on_launch = each.value.map_public_ip
  
  tags = merge(
    local.merged_configurations.final_tags,
    {
      Name = each.value.name
      Tier = each.value.tier
      AvailabilityZone = each.value.availability_zone
    }
  )
}

# Security Group mit dynamischen Regeln
resource "aws_security_group" "services" {
  name_prefix = "${var.project_name}-${var.environment}-services-"
  vpc_id      = aws_vpc.main.id
  
  dynamic "ingress" {
    # Erst null-Einträge entfernen, dann filtern: && wertet in HCL beide
    # Seiten aus, ein direkter rule.type-Zugriff auf null würde fehlschlagen
    for_each = {
      for rule in [for r in local.flattened_data.all_security_group_rules : r if r != null] :
      "${rule.service}-${rule.from_port}" => rule
      if rule.type == "ingress"
    }
    
    content {
      from_port   = ingress.value.from_port
      to_port     = ingress.value.to_port
      protocol    = ingress.value.protocol
      cidr_blocks = ["10.0.0.0/16"]
      description = ingress.value.description
    }
  }
  
  tags = local.merged_configurations.final_tags
}
HCL

💡 Collection-Performance-Optimierungen:

┌ Verwende for_each statt count für bessere State-Verwaltung
├ Cache komplexe Collection-Operationen in Locals
├ Vermeide tiefe Verschachtelungen von for-expressions
└ Nutze slice() für große Listen statt vollständige Iteration
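
Der erste Tipp im Überblick – for_each adressiert Instanzen über stabile Keys statt über Positions-Indizes (die Bucket-Namen sind frei gewählte Beispiele):

```hcl
# Mit count verschiebt das Entfernen eines Elements alle nachfolgenden
# Indizes und erzwingt unnötige Destroy/Create-Zyklen.
# for_each nutzt stattdessen stabile Set-/Map-Keys als State-Adresse:
resource "aws_s3_bucket" "per_service" {
  for_each = toset(["frontend", "backend", "worker"])

  bucket = "example-${each.key}"
}

# State-Adresse: aws_s3_bucket.per_service["backend"]
# Sie bleibt stabil, auch wenn "frontend" später aus der Liste entfernt wird
```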

⚠️ Collection-Stolperfallen:

Problem           | Symptom                          | Lösung
Type-Inkonsistenz | Error: Invalid value type        | Explizite Type-Konvertierung
Null-Werte        | Error: Invalid function argument | Null-Checks mit != null
Performance       | Langsame Plan-Zeiten             | Collection-Größe begrenzen
Komplexität       | Unlesbare for-expressions        | In kleinere Operationen aufteilen
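
Die beiden häufigsten Stolperfallen – Null-Werte und Type-Inkonsistenz – lassen sich mit einem kurzen Muster entschärfen (die Beispielwerte sind frei gewählt):

```hcl
locals {
  raw_ports = [80, null, 443]

  # Null-Werte vor der Weiterverarbeitung herausfiltern,
  # sonst scheitern Funktionen wie tostring() am null-Eintrag
  valid_ports = [for p in local.raw_ports : p if p != null]

  # Explizite Type-Konvertierung statt gemischter Typen in einer Liste
  port_labels = [for p in local.valid_ports : tostring(p)]
  # Ergebnis: ["80", "443"]
}
```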
Conditional Expressions und Dynamic Blocks

Was sind Conditional Expressions? Conditional Expressions ermöglichen if-else-Logik in HCL. Sie verwenden die Ternary-Operator-Syntax (condition ? true_value : false_value) und machen deine Konfigurationen umgebungsabhängig und intelligent.

Warum sind Conditional Expressions essentiell? Sie eliminieren die Notwendigkeit für separate Konfigurationsdateien pro Umgebung. Eine einzige Terraform-Konfiguration kann sich dynamisch an verschiedene Szenarien anpassen. Das reduziert Code-Duplikation und macht deine Infrastruktur wartbarer.

Was sind Dynamic Blocks? Dynamic Blocks ermöglichen es, sich wiederholende Blöcke basierend auf Daten zu generieren. Sie sind wie Schleifen für Terraform-Ressourcen-Konfigurationen und machen statische, repetitive Definitionen überflüssig.

Worauf musst du bei Conditional Logic achten? Komplexe verschachtelte Conditions können unlesbar werden. Performance kann bei vielen Dynamic Blocks leiden. Type-Konsistenz zwischen true- und false-Werten ist wichtig. Debugging wird schwieriger mit komplexer Conditional Logic.

Wofür verwendest du Conditional Expressions und Dynamic Blocks? Für Environment-spezifische Konfigurationen, Feature-Flags, Resource-Counts, Security-Policies und überall, wo du flexible, datengetriebene Infrastruktur benötigst.

# Erweiterte Conditional Expressions und Dynamic Blocks
locals {
  # ============================================================================
  # BASIS-CONDITIONAL-LOGIC
  # ============================================================================
  
  # Environment-Detection und -Klassifizierung
  environment_classification = {
    is_production = var.environment == "prod"
    is_staging = var.environment == "staging"
    is_development = var.environment == "dev"
    is_non_production = var.environment != "prod"
    
    # Erweiterte Environment-Checks
    requires_high_availability = contains(["prod", "staging"], var.environment)
    allows_experimental_features = contains(["dev", "sandbox"], var.environment)
    requires_compliance = var.environment == "prod"
    supports_cost_optimization = var.environment != "prod"
  }
  
  # Time-based Conditionals
  # ❗ Ein Local darf sich nicht selbst referenzieren – current_hour wird
  # deshalb als eigenes Local definiert
  # ⚠️ timestamp() liefert bei jedem Lauf einen neuen Wert und macht
  # davon abhängige Pläne nicht-deterministisch
  current_hour = tonumber(formatdate("hh", timestamp()))
  
  time_based_conditions = {
    is_business_hours = local.current_hour >= 9 && local.current_hour <= 17
    is_weekend = contains(["saturday", "sunday"], lower(formatdate("EEEE", timestamp())))
    
    # Auto-Shutdown-Logik für Entwicklung
    should_auto_shutdown = (
      local.environment_classification.is_development &&
      !(local.current_hour >= 9 && local.current_hour <= 17)
    )
  }
  
  # ============================================================================
  # ERWEITERTE CONDITIONAL CONFIGURATIONS
  # ============================================================================
  
  # Instance-Konfiguration mit komplexer Conditional Logic
  instance_configuration = {
    # Instance Type basierend auf Environment und Workload
    instance_type = (
      local.environment_classification.is_production ? "t3.large" : 
      local.environment_classification.is_staging ? "t3.medium" : 
      "t3.micro"
    )
    
    # Instance Count mit Business-Logic
    instance_count = (
      local.environment_classification.is_production ? 5 : 
      local.environment_classification.requires_high_availability ? 3 : 
      1
    )
    
    # Storage-Konfiguration
    storage_configuration = {
      volume_type = local.environment_classification.is_production ? "gp3" : "gp2"
      volume_size = (
        local.environment_classification.is_production ? 100 : 
        local.environment_classification.is_staging ? 50 : 
        20
      )
      iops = local.environment_classification.is_production ? 3000 : null
      throughput = local.environment_classification.is_production ? 125 : null
      encrypted = local.environment_classification.requires_compliance
    }
    
    # Monitoring-Konfiguration
    monitoring = {
      enabled = local.environment_classification.requires_high_availability
      detailed_monitoring = local.environment_classification.is_production
      log_level = (
        local.environment_classification.is_production ? "ERROR" : 
        local.environment_classification.is_staging ? "WARN" : 
        "DEBUG"
      )
      retention_days = (
        local.environment_classification.is_production ? 90 : 
        local.environment_classification.is_staging ? 30 : 
        7
      )
    }
  }
  
  # Database-Konfiguration mit Conditional Logic
  database_configuration = {
    # Engine-Spezifische Konfiguration
    engine_config = {
      engine = "mysql"
      version = local.environment_classification.is_production ? "8.0.35" : "8.0.28"
      instance_class = (
        local.environment_classification.is_production ? "db.r5.xlarge" : 
        local.environment_classification.is_staging ? "db.t3.medium" : 
        "db.t3.micro"
      )
    }
    
    # Backup-Strategie
    backup_strategy = {
      backup_retention_period = (
        local.environment_classification.is_production ? 30 : 
        local.environment_classification.is_staging ? 7 : 
        0
      )
      backup_window = local.environment_classification.requires_high_availability ? "03:00-04:00" : "06:00-07:00"
      maintenance_window = local.environment_classification.requires_high_availability ? "sun:04:00-sun:05:00" : "sun:07:00-sun:08:00"
      
      # Cross-Region-Backup nur für Produktion
      copy_tags_to_snapshot = local.environment_classification.is_production
      delete_automated_backups = !local.environment_classification.requires_compliance
    }
    
    # Performance-Konfiguration
    performance_config = {
      multi_az = local.environment_classification.requires_high_availability
      performance_insights_enabled = local.environment_classification.is_production
      monitoring_interval = (
        local.environment_classification.is_production ? 15 : 
        local.environment_classification.is_staging ? 60 : 
        0
      )
    }
  }
  
  # ============================================================================
  # SECURITY-KONFIGURATION MIT CONDITIONAL LOGIC
  # ============================================================================
  
  # Security-Policies basierend auf Environment
  security_configuration = {
    # Encryption-Anforderungen
    encryption_requirements = {
      storage_encrypted = local.environment_classification.requires_compliance
      kms_key_rotation = local.environment_classification.is_production
      backup_encryption = local.environment_classification.requires_compliance
      log_encryption = local.environment_classification.is_production
    }
    
    # Network-Security
    network_security = {
      # SSH-Zugriff nur in Development
      allow_ssh_from_internet = local.environment_classification.is_development
      
      # VPC Flow Logs für Compliance
      enable_vpc_flow_logs = local.environment_classification.requires_compliance
      
      # Network ACLs für zusätzliche Sicherheit
      enable_network_acls = local.environment_classification.is_production
      
      # Bastion Host nur bei Bedarf
      deploy_bastion_host = (
        local.environment_classification.requires_high_availability && 
        !local.environment_classification.is_development
      )
    }
    
    # Compliance-Features
    compliance_features = {
      enable_cloudtrail = local.environment_classification.requires_compliance
      enable_config = local.environment_classification.requires_compliance
      enable_guardduty = local.environment_classification.is_production
      enable_security_hub = local.environment_classification.is_production
      
      # Audit-Logging
      audit_log_retention = (
        local.environment_classification.requires_compliance ? 2555 : # 7 Jahre
        local.environment_classification.is_staging ? 365 : # 1 Jahr
        30 # 30 Tage
      )
    }
  }
  
  # ============================================================================
  # DYNAMIC BLOCK DATEN-STRUKTUREN
  # ============================================================================
  
  # Security Group-Regeln für verschiedene Szenarien
  # Die Teillisten sind eigene Locals – ein Local darf sich nicht selbst
  # referenzieren, all_rules könnte die Geschwister-Attribute desselben
  # Objekts daher nicht per concat() kombinieren
  
  # Basis-Regeln für alle Environments
  base_security_rules = [
    {
      type = "egress"
      from_port = 0
      to_port = 0
      protocol = "-1"
      cidr_blocks = ["0.0.0.0/0"]
      description = "All outbound traffic"
    }
  ]
  
  # Web-Traffic-Regeln
  web_security_rules = [
    {
      type = "ingress"
      from_port = 80
      to_port = 80
      protocol = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
      description = "HTTP from internet"
    },
    {
      type = "ingress"
      from_port = 443
      to_port = 443
      protocol = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
      description = "HTTPS from internet"
    }
  ]
  
  # Development-spezifische Regeln
  development_security_rules = local.environment_classification.is_development ? [
    {
      type = "ingress"
      from_port = 22
      to_port = 22
      protocol = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
      description = "SSH for development"
    },
    {
      type = "ingress"
      from_port = 8080
      to_port = 8080
      protocol = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
      description = "Development server"
    }
  ] : []
  
  # Monitoring-Regeln für Produktion
  monitoring_security_rules = local.environment_classification.is_production ? [
    {
      type = "ingress"
      from_port = 9090
      to_port = 9090
      protocol = "tcp"
      cidr_blocks = ["10.0.0.0/16"]
      description = "Prometheus metrics"
    },
    {
      type = "ingress"
      from_port = 3000
      to_port = 3000
      protocol = "tcp"
      cidr_blocks = ["10.0.0.0/16"]
      description = "Grafana dashboard"
    }
  ] : []
  
  security_group_rules = {
    base_rules = local.base_security_rules
    web_rules = local.web_security_rules
    development_rules = local.development_security_rules
    monitoring_rules = local.monitoring_security_rules
    
    # Kombinierte Regeln
    all_rules = concat(
      local.base_security_rules,
      local.web_security_rules,
      local.development_security_rules,
      local.monitoring_security_rules
    )
  }
  
  # Auto Scaling-Policies für verschiedene Szenarien
  # Auch hier gilt: Teillisten als eigene Locals definieren, damit
  # all_policies sie ohne Selbstreferenz kombinieren kann
  
  # CPU-basierte Skalierung
  cpu_scaling_policies = local.environment_classification.requires_high_availability ? [
    {
      name = "scale-up-cpu"
      scaling_adjustment = 2
      adjustment_type = "ChangeInCapacity"
      cooldown = 300
      metric_name = "CPUUtilization"
      threshold = 70
      comparison_operator = "GreaterThanThreshold"
      evaluation_periods = 2
      period = 120
    },
    {
      name = "scale-down-cpu"
      scaling_adjustment = -1
      adjustment_type = "ChangeInCapacity"
      cooldown = 300
      metric_name = "CPUUtilization"
      threshold = 30
      comparison_operator = "LessThanThreshold"
      evaluation_periods = 2
      period = 120
    }
  ] : []
  
  # Memory-basierte Skalierung für Produktion
  memory_scaling_policies = local.environment_classification.is_production ? [
    {
      name = "scale-up-memory"
      scaling_adjustment = 1
      adjustment_type = "ChangeInCapacity"
      cooldown = 600
      metric_name = "MemoryUtilization"
      threshold = 80
      comparison_operator = "GreaterThanThreshold"
      evaluation_periods = 3
      period = 300
    }
  ] : []
  
  # Schedule-basierte Skalierung für Kostenoptimierung
  schedule_scaling_policies = local.environment_classification.supports_cost_optimization ? [
    {
      name = "scale-down-night"
      min_size = 0
      max_size = 1
      desired_capacity = 0
      recurrence = "0 22 * * MON-FRI"  # 22:00 Uhr werktags
    },
    {
      name = "scale-up-morning"
      min_size = 1
      max_size = 3
      desired_capacity = 1
      recurrence = "0 8 * * MON-FRI"   # 08:00 Uhr werktags
    }
  ] : []
  
  autoscaling_policies = {
    cpu_policies = local.cpu_scaling_policies
    memory_policies = local.memory_scaling_policies
    schedule_policies = local.schedule_scaling_policies
    
    # Alle Policies kombiniert
    all_policies = concat(
      local.cpu_scaling_policies,
      local.memory_scaling_policies,
      local.schedule_scaling_policies
    )
  }
}

# Erweiterte Ressourcen mit Dynamic Blocks und Conditional Logic
resource "aws_instance" "application" {
  count = local.instance_configuration.instance_count
  
  ami           = data.aws_ami.ubuntu.id
  instance_type = local.instance_configuration.instance_type
  
  # Conditional Subnet-Placement
  subnet_id = var.subnet_ids[count.index % length(var.subnet_ids)]
  
  # Conditional Storage-Konfiguration
  root_block_device {
    volume_type = local.instance_configuration.storage_configuration.volume_type
    volume_size = local.instance_configuration.storage_configuration.volume_size
    encrypted   = local.instance_configuration.storage_configuration.encrypted
    
    # Conditional IOPS und Throughput für gp3
    iops       = local.instance_configuration.storage_configuration.volume_type == "gp3" ? local.instance_configuration.storage_configuration.iops : null
    throughput = local.instance_configuration.storage_configuration.volume_type == "gp3" ? local.instance_configuration.storage_configuration.throughput : null
  }
  
  # Conditional Monitoring
  monitoring = local.instance_configuration.monitoring.detailed_monitoring
  
  # Dynamic User Data basierend auf Environment
  user_data = base64encode(templatefile("${path.module}/userdata.sh", {
    environment = var.environment
    instance_index = count.index
    log_level = local.instance_configuration.monitoring.log_level
    monitoring_enabled = local.instance_configuration.monitoring.enabled
    is_production = local.environment_classification.is_production
  }))
  
  # Conditional Auto-Shutdown für Development
  dynamic "credit_specification" {
    for_each = local.environment_classification.supports_cost_optimization && startswith(local.instance_configuration.instance_type, "t") ? [1] : []
    content {
      cpu_credits = "unlimited"
    }
  }
  
  tags = merge(
    var.base_tags,
    {
      Name = "${var.project_name}-${var.environment}-app-${count.index + 1}"
      Environment = var.environment
      InstanceIndex = count.index
      AutoShutdown = local.time_based_conditions.should_auto_shutdown
    }
  )
  
  lifecycle {
    # Achtung: Lifecycle-Argumente akzeptieren nur statische Literale –
    # Referenzen auf Variablen oder Locals führen hier zu einem Fehler
    create_before_destroy = true
    ignore_changes        = [ami]
  }
}

# Security Group mit Dynamic Blocks für flexible Regeln
resource "aws_security_group" "application" {
  name_prefix = "${var.project_name}-${var.environment}-app-"
  vpc_id      = var.vpc_id
  
  # Dynamic Ingress-Regeln
  dynamic "ingress" {
    for_each = {
      for rule in local.security_group_rules.all_rules :
      "${rule.type}-${rule.from_port}-${rule.to_port}" => rule
      if rule.type == "ingress"
    }
    
    content {
      from_port   = ingress.value.from_port
      to_port     = ingress.value.to_port
      protocol    = ingress.value.protocol
      cidr_blocks = ingress.value.cidr_blocks
      description = ingress.value.description
    }
  }
  
  # Dynamic Egress-Regeln
  dynamic "egress" {
    for_each = {
      for rule in local.security_group_rules.all_rules :
      "${rule.type}-${rule.from_port}-${rule.to_port}" => rule
      if rule.type == "egress"
    }
    
    content {
      from_port   = egress.value.from_port
      to_port     = egress.value.to_port
      protocol    = egress.value.protocol
      cidr_blocks = egress.value.cidr_blocks
      description = egress.value.description
    }
  }
  
  tags = {
    Name = "${var.project_name}-${var.environment}-app-sg"
    Environment = var.environment
    RuleCount = length(local.security_group_rules.all_rules)
  }
}

# Auto Scaling Group mit Dynamic Scaling Policies
resource "aws_autoscaling_group" "application" {
  count = local.environment_classification.requires_high_availability ? 1 : 0
  
  name                = "${var.project_name}-${var.environment}-asg"
  vpc_zone_identifier = var.subnet_ids
  
  min_size         = local.instance_configuration.instance_count
  max_size         = local.instance_configuration.instance_count * 3
  desired_capacity = local.instance_configuration.instance_count
  
  launch_template {
    id      = aws_launch_template.application.id
    version = "$Latest"
  }
  
  # Conditional Health Check
  health_check_type         = local.environment_classification.is_production ? "ELB" : "EC2"
  health_check_grace_period = local.environment_classification.is_production ? 300 : 60
  
  # Dynamic Tags
  dynamic "tag" {
    for_each = merge(
      var.base_tags,
      {
        Name = "${var.project_name}-${var.environment}-asg"
        Environment = var.environment
        AutoScaling = "enabled"
      }
    )
    
    content {
      key                 = tag.key
      value               = tag.value
      propagate_at_launch = true
    }
  }
}

# CloudWatch Alarms mit Dynamic Blocks
resource "aws_cloudwatch_metric_alarm" "application_alarms" {
  for_each = {
    for policy in local.autoscaling_policies.all_policies :
    policy.name => policy
    if policy.metric_name != null
  }
  
  alarm_name          = "${var.project_name}-${var.environment}-${each.value.name}"
  comparison_operator = each.value.comparison_operator
  evaluation_periods  = each.value.evaluation_periods
  metric_name         = each.value.metric_name
  namespace           = "AWS/EC2"
  period              = each.value.period
  statistic           = "Average"
  threshold           = each.value.threshold
  alarm_description   = "Auto scaling alarm for ${each.value.name}"
  
  # Conditional Alarm Actions
  alarm_actions = local.environment_classification.requires_high_availability ? [
    aws_autoscaling_policy.application_policies[each.key].arn
  ] : []
  
  # Conditional Dimensions – "dimensions" ist bei diesem Ressourcentyp
  # ein Map-Argument, kein Nested Block
  dimensions = local.environment_classification.requires_high_availability ? {
    AutoScalingGroupName = aws_autoscaling_group.application[0].name
  } : null
  
  tags = {
    Name = "${var.project_name}-${var.environment}-${each.value.name}-alarm"
    Environment = var.environment
    MetricName = each.value.metric_name
  }
}
HCL

Conditional Logic Flow-Diagram:

┌─────────────────────────────────────────────────────────────┐
│                Conditional Logic Flow                       │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  Environment Input                                          │
│         ↓                                                   │
│  ┌─────────────────────┐                                    │
│  │ Environment         │                                    │
│  │ Classification      │                                    │
│  │ is_production       │                                    │
│  │ requires_ha         │                                    │
│  │ allows_experimental │                                    │
│  └─────────────────────┘                                    │
│         ↓                                                   │
│  ┌──────────────────────┐    ┌─────────────────────────┐    │
│  │ Instance Config      │    │ Security Config         │    │
│  │ type: conditional    │    │ encryption: conditional │    │
│  │ count: conditional   │    │ monitoring: conditional │    │
│  │ storage: conditional │    │ compliance: conditional │    │
│  └──────────────────────┘    └─────────────────────────┘    │
│         ↓                       ↓                           │
│  ┌──────────────────────┐    ┌─────────────────┐            │
│  │ Dynamic Blocks       │    │ Resource        │            │
│  │ security_rules       │    │ Configuration   │            │
│  │ scaling_policies     │    │ Final State     │            │
│  │ monitoring_targets   │    └─────────────────┘            │
│  └──────────────────────┘                                   │
└─────────────────────────────────────────────────────────────┘
Markdown

Conditional Logic Best Practices:

| Pattern | Anwendung | Performance | Wartbarkeit |
| --- | --- | --- | --- |
| Ternary Operator | Einfache if-else | Sehr hoch | Hoch |
| Complex Conditions | Multi-Kriterien | Hoch | Mittel |
| Dynamic Blocks | Wiederholende Struktur | Mittel | Hoch |
| for_each mit Conditions | Conditional Resources | Mittel | Sehr hoch |

💡 Conditional Logic-Optimierungen:

┌ Verwende for_each statt count für bessere State-Verwaltung
├ Cache komplexe Collection-Operationen in Locals
├ Vermeide tiefe Verschachtelungen von for-expressions
└ Nutze slice() für große Listen statt vollständige Iteration
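
💡 Ein minimales Beispiel zu den letzten beiden Punkten – `var.all_servers` ist hier nur ein hypothetischer Platzhalter:

# Komplexe Operationen einmal in Locals cachen, slice() für Teilmengen
locals {
  # Teure Sortierung nur einmal berechnen statt mehrfach inline
  sorted_server_names = sort(keys(var.all_servers))

  # Nur die ersten drei Einträge weiterverarbeiten statt alles zu iterieren
  primary_servers = slice(
    local.sorted_server_names,
    0,
    min(3, length(local.sorted_server_names))
  )
}
HCL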

⚠️ Conditional Logic-Stolperfallen:

| Problem | Symptom | Lösung |
| --- | --- | --- |
| Type-Mismatch | Error: Inconsistent conditional result types | Gleiche Typen in beiden Branches |
| Null-Conditions | Error: Invalid value for conditional | Null-Checks hinzufügen |
| Performance | Langsame Evaluation | Conditions in Locals cachen |
| Debugging | Schwer nachvollziehbare Logik | Einfachere Conditions verwenden |
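
🔧 Die ersten beiden Zeilen der Tabelle als minimales Beispiel (die Variablennamen sind frei gewählt):

# Falsch: Error: Inconsistent conditional result types
# instance_count = var.is_production ? 3 : "1"

# Richtig: beide Branches liefern denselben Typ
instance_count = var.is_production ? 3 : 1

# Null-Check, bevor auf ein optionales Objekt zugegriffen wird
log_level = var.monitoring_config == null ? "info" : var.monitoring_config.log_level
HCL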

Debugging-Techniken für Conditional Logic:

# Conditional Logic in terraform console testen
terraform console
> local.environment_classification.is_production
> local.instance_configuration.instance_type
> local.security_group_rules.all_rules

# Dynamic Block-Inhalte inspizieren
terraform plan | grep -A 10 "dynamic"

# Conditional Resources validieren
terraform state list | grep conditional_resource
Bash

For-Expressions für komplexe Transformationen

Was sind For-Expressions? For-Expressions sind eine der mächtigsten HCL-Funktionen für Daten-Transformation. Sie kombinieren die Flexibilität von Loops mit der Ausdruckskraft von Conditional Logic und ermöglichen es dir, komplexe Datenstrukturen in einem einzigen, eleganten Ausdruck zu erstellen. For-Expressions sind wie SQL-Queries für deine Terraform-Konfiguration.

Warum sind For-Expressions so wichtig? Sie verwandeln statische Terraform-Konfigurationen in dynamische, datengetriebene Infrastruktur-Definitionen. Ohne For-Expressions müsstest du repetitive Ressourcen manuell definieren oder externe Skripte verwenden. Mit ihnen erstellst du flexible, skalierbare Konfigurationen, die sich automatisch an sich ändernde Datenstrukturen anpassen.

Worauf musst du bei For-Expressions achten? Komplexe verschachtelte For-Expressions können schwer lesbar und schwer zu debuggen sein. Performance kann bei großen Datensätzen leiden. Type-Sicherheit ist wichtig – stelle sicher, dass alle Iterationen konsistente Datentypen zurückgeben. Fehlende Null-Checks können zu Laufzeitfehlern führen.
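
🔧 Ein minimaler Sketch dazu – die Datenstruktur `var.services` und ihre Attributnamen sind frei erfunden:

# Null-sichere For-Expression mit try() und konsistenten Typen
locals {
  # try() fängt fehlende Attribute ab und liefert einen Default
  service_ports = {
    for name, svc in var.services :
    name => try(svc.port, 8080)
  }

  # tostring() erzwingt einen konsistenten Rückgabetyp für alle Iterationen
  service_summaries = [
    for name, svc in var.services :
    "${name}:${tostring(try(svc.port, 8080))}"
  ]
}
HCL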

Wofür verwendest du For-Expressions? Für dynamische Ressourcen-Generierung, Daten-Aggregation, komplexe Transformationen zwischen verschiedenen Datenformaten, Filtering mit Business-Logic und überall, wo du aus einer Datenstruktur eine andere erstellen musst.

🔧 Praktisches Beispiel – Grundlegende For-Expression-Syntax:

# Erweiterte For-Expressions für komplexe Daten-Transformationen
locals {
  # ============================================================================
  # BASIS-DATEN FÜR FOR-EXPRESSION-BEISPIELE
  # ============================================================================
  
  # Komplexe Eingangsdaten
  application_services = {
    frontend = {
      name = "frontend"
      port = 3000
      replicas = 3
      cpu_limit = "500m"
      memory_limit = "512Mi"
      environment = "production"
      health_check = "/health"
      dependencies = ["backend", "cache"]
      labels = {
        tier = "presentation"
        version = "v1.2.3"
        team = "frontend-team"
      }
      volumes = [
        {
          name = "config"
          path = "/app/config"
          size = "1Gi"
        },
        {
          name = "logs"
          path = "/app/logs"
          size = "5Gi"
        }
      ]
    }
    backend = {
      name = "backend"
      port = 8080
      replicas = 5
      cpu_limit = "1000m"
      memory_limit = "1024Mi" # einheitlich in Mi, damit split("Mi", ...) weiter unten funktioniert
      environment = "production"
      health_check = "/api/health"
      dependencies = ["database", "cache"]
      labels = {
        tier = "application"
        version = "v2.1.0"
        team = "backend-team"
      }
      volumes = [
        {
          name = "data"
          path = "/app/data"
          size = "10Gi"
        }
      ]
    }
    worker = {
      name = "worker"
      port = 9000
      replicas = 2
      cpu_limit = "750m"
      memory_limit = "768Mi"
      environment = "production"
      health_check = "/worker/health"
      dependencies = ["database", "queue"]
      labels = {
        tier = "worker"
        version = "v1.0.5"
        team = "backend-team"
      }
      volumes = []
    }
    cache = {
      name = "cache"
      port = 6379
      replicas = 1
      cpu_limit = "250m"
      memory_limit = "256Mi"
      environment = "production"
      health_check = "/ping"
      dependencies = []
      labels = {
        tier = "cache"
        version = "v6.2.0"
        team = "platform-team"
      }
      volumes = [
        {
          name = "cache-data"
          path = "/data"
          size = "2Gi"
        }
      ]
    }
  }
  
  # Availability Zones und Regions
  regions = {
    "us-east-1" = {
      azs = ["us-east-1a", "us-east-1b", "us-east-1c"]
      vpc_cidr = "10.0.0.0/16"
      instance_types = ["t3.micro", "t3.small", "t3.medium"]
    }
    "us-west-2" = {
      azs = ["us-west-2a", "us-west-2b", "us-west-2c"]
      vpc_cidr = "10.1.0.0/16"
      instance_types = ["t3.small", "t3.medium", "t3.large"]
    }
    "eu-west-1" = {
      azs = ["eu-west-1a", "eu-west-1b", "eu-west-1c"]
      vpc_cidr = "10.2.0.0/16"
      instance_types = ["t3.medium", "t3.large", "t3.xlarge"]
    }
  }
  
  # Environment-Konfigurationen
  environments = {
    dev = {
      replica_multiplier = 0.5
      cpu_multiplier = 0.5
      memory_multiplier = 0.5
      storage_multiplier = 0.5
      monitoring_enabled = false
      backup_enabled = false
    }
    staging = {
      replica_multiplier = 0.8
      cpu_multiplier = 0.8
      memory_multiplier = 0.8
      storage_multiplier = 0.8
      monitoring_enabled = true
      backup_enabled = true
    }
    prod = {
      replica_multiplier = 1.0
      cpu_multiplier = 1.0
      memory_multiplier = 1.0
      storage_multiplier = 1.0
      monitoring_enabled = true
      backup_enabled = true
    }
  }
  
  # ============================================================================
  # GRUNDLEGENDE FOR-EXPRESSION-PATTERNS
  # ============================================================================
  
  # Einfache List-Comprehension
  basic_list_transformations = {
    # Service-Namen extrahieren
    service_names = [
      for service_name, config in local.application_services :
      service_name
    ]
    
    # Ports extrahieren
    service_ports = [
      for service_name, config in local.application_services :
      config.port
    ]
    
    # Service-Namen in Uppercase
    service_names_upper = [
      for service_name, config in local.application_services :
      upper(service_name)
    ]
    
    # Kombinierte Werte
    service_endpoints = [
      for service_name, config in local.application_services :
      "${service_name}:${config.port}"
    ]
  }
  
  # Einfache Object-Comprehension
  basic_object_transformations = {
    # Port-Mapping
    port_mapping = {
      for service_name, config in local.application_services :
      service_name => config.port
    }
    
    # Health-Check-Mapping
    health_check_mapping = {
      for service_name, config in local.application_services :
      service_name => config.health_check
    }
    
    # Replica-Mapping
    replica_mapping = {
      for service_name, config in local.application_services :
      service_name => config.replicas
    }
    
    # Label-Tier-Mapping
    tier_mapping = {
      for service_name, config in local.application_services :
      service_name => config.labels.tier
    }
  }
  
  # ============================================================================
  # CONDITIONAL FOR-EXPRESSIONS
  # ============================================================================
  
  # For-Expressions mit Conditional Logic
  conditional_transformations = {
    # Nur Web-Services (Frontend/Backend)
    web_services = {
      for service_name, config in local.application_services :
      service_name => config
      if contains(["frontend", "backend"], service_name)
    }
    
    # Services mit hohem Memory-Bedarf
    memory_intensive_services = [
      for service_name, config in local.application_services :
      service_name
      if tonumber(split("Mi", config.memory_limit)[0]) > 500
    ]
    
    # Services mit Dependencies
    services_with_dependencies = {
      for service_name, config in local.application_services :
      service_name => config.dependencies
      if length(config.dependencies) > 0
    }
    
    # Services mit Volumes
    services_with_storage = {
      for service_name, config in local.application_services :
      service_name => {
        volumes = config.volumes
        total_storage = sum([
          for volume in config.volumes :
          tonumber(split("Gi", volume.size)[0])
        ])
      }
      if length(config.volumes) > 0
    }
    
    # Team-basierte Gruppierung
    backend_team_services = [
      for service_name, config in local.application_services :
      service_name
      if config.labels.team == "backend-team"
    ]
    
    # Environment-spezifische Konfiguration
    production_ready_services = {
      for service_name, config in local.application_services :
      service_name => merge(config, {
        # Produktions-spezifische Anpassungen
        replicas = max(2, config.replicas)  # Minimum 2 Replicas
        monitoring_enabled = true
        backup_enabled = true
      })
      if config.environment == "production"
    }
  }
  
  # ============================================================================
  # VERSCHACHTELTE FOR-EXPRESSIONS
  # ============================================================================
  
  # Komplexe verschachtelte Transformationen
  nested_transformations = {
    # Alle Service-Volume-Kombinationen
    all_service_volumes = flatten([
      for service_name, config in local.application_services : [
        for volume in config.volumes : {
          service = service_name
          volume_name = volume.name
          volume_path = volume.path
          volume_size = volume.size
          full_name = "${service_name}-${volume.name}"
        }
      ]
    ])
    
    # Service-Dependency-Graph
    service_dependency_graph = {
      for service_name, config in local.application_services :
      service_name => {
        depends_on = config.dependencies
        dependents = [
          for other_service, other_config in local.application_services :
          other_service
          if contains(other_config.dependencies, service_name)
        ]
        total_connections = length(config.dependencies) + length([
          for other_service, other_config in local.application_services :
          other_service
          if contains(other_config.dependencies, service_name)
        ])
      }
    }
    
    # Multi-Region-Service-Deployment
    multi_region_deployments = {
      for region_name, region_config in local.regions :
      region_name => {
        region = region_name
        vpc_cidr = region_config.vpc_cidr
        services = {
          for service_name, service_config in local.application_services :
          service_name => {
            service = service_name
            region = region_name
            subnets = [
              for i, az in region_config.azs :
              {
                name = "${service_name}-${region_name}-${i + 1}"
                az = az
                cidr = cidrsubnet(region_config.vpc_cidr, 8, i)
                instance_type = region_config.instance_types[i % length(region_config.instance_types)]
              }
            ]
            load_balancer_targets = [
              for i, az in region_config.azs :
              {
                target_id = "${service_name}-${region_name}-${i + 1}"
                az = az
                port = service_config.port
                health_check = service_config.health_check
              }
            ]
          }
        }
      }
    }
    
    # Environment-Service-Matrix
    environment_service_matrix = {
      for env_name, env_config in local.environments :
      env_name => {
        environment = env_name
        services = {
          for service_name, service_config in local.application_services :
          service_name => {
            name = service_name
            environment = env_name
            replicas = max(1, floor(service_config.replicas * env_config.replica_multiplier))
            cpu_limit = "${floor(tonumber(split("m", service_config.cpu_limit)[0]) * env_config.cpu_multiplier)}m"
            memory_limit = "${floor(tonumber(split("Mi", service_config.memory_limit)[0]) * env_config.memory_multiplier)}Mi"
            monitoring_enabled = env_config.monitoring_enabled
            backup_enabled = env_config.backup_enabled
            volumes = [
              for volume in service_config.volumes :
              {
                name = volume.name
                path = volume.path
                size = "${floor(tonumber(split("Gi", volume.size)[0]) * env_config.storage_multiplier)}Gi"
              }
            ]
            full_name = "${service_name}-${env_name}"
          }
        }
        total_replicas = sum([
          for service_name, service_config in local.application_services :
          max(1, floor(service_config.replicas * env_config.replica_multiplier))
        ])
        total_cpu = sum([
          for service_name, service_config in local.application_services :
          floor(tonumber(split("m", service_config.cpu_limit)[0]) * env_config.cpu_multiplier)
        ])
        total_memory = sum([
          for service_name, service_config in local.application_services :
          floor(tonumber(split("Mi", service_config.memory_limit)[0]) * env_config.memory_multiplier)
        ])
      }
    }
  }
  
  # ============================================================================
  # ERWEITERTE FOR-EXPRESSION-PATTERNS
  # ============================================================================
  
  # Komplexe Daten-Aggregation
  advanced_aggregations = {
    # Team-basierte Statistiken
    team_statistics = {
      for team in distinct([
        for service_name, config in local.application_services :
        config.labels.team
      ]) :
      team => {
        team_name = team
        services = [
          for service_name, config in local.application_services :
          service_name
          if config.labels.team == team
        ]
        total_replicas = sum([
          for service_name, config in local.application_services :
          config.replicas
          if config.labels.team == team
        ])
        total_cpu = sum([
          for service_name, config in local.application_services :
          tonumber(split("m", config.cpu_limit)[0])
          if config.labels.team == team
        ])
        total_memory = sum([
          for service_name, config in local.application_services :
          tonumber(split("Mi", config.memory_limit)[0])
          if config.labels.team == team
        ])
        service_count = length([
          for service_name, config in local.application_services :
          service_name
          if config.labels.team == team
        ])
      }
    }
    
    # Tier-basierte Konfiguration
    tier_configurations = {
      for tier in distinct([
        for service_name, config in local.application_services :
        config.labels.tier
      ]) :
      tier => {
        tier_name = tier
        services = {
          for service_name, config in local.application_services :
          service_name => {
            port = config.port
            replicas = config.replicas
            health_check = config.health_check
            dependencies = config.dependencies
            resource_requirements = {
              cpu = config.cpu_limit
              memory = config.memory_limit
            }
          }
          if config.labels.tier == tier
        }
        load_balancer_config = {
          enabled = contains(["presentation", "application"], tier)
          port = tier == "presentation" ? 80 : 8080
          health_check_path = "/health"
          targets = [
            for service_name, config in local.application_services :
            {
              service = service_name
              port = config.port
              health_check = config.health_check
            }
            if config.labels.tier == tier
          ]
        }
        security_group_rules = [
          for service_name, config in local.application_services : {
            service = service_name
            port = config.port
            tier = tier
            rule = {
              type = "ingress"
              from_port = config.port
              to_port = config.port
              protocol = "tcp"
              description = "Allow traffic to ${service_name} in ${tier} tier"
              source_tier = tier == "presentation" ? "internet" : "application"
            }
          }
          if config.labels.tier == tier
        ]
      }
    }
    
    # Dependency-Chain-Analyse
    dependency_chains = {
      for service_name, config in local.application_services :
      service_name => {
        service = service_name
        direct_dependencies = config.dependencies
        indirect_dependencies = flatten([
          for dep in config.dependencies : [
            for indirect_dep in try(local.application_services[dep].dependencies, []) :
            indirect_dep
            if !contains(config.dependencies, indirect_dep)
          ]
        ])
        all_dependencies = distinct(concat(
          config.dependencies,
          flatten([
            for dep in config.dependencies : [
              for indirect_dep in try(local.application_services[dep].dependencies, []) :
              indirect_dep
              if !contains(config.dependencies, indirect_dep)
            ]
          ])
        ))
        dependency_depth = length(distinct(concat(
          config.dependencies,
          flatten([
            for dep in config.dependencies : [
              for indirect_dep in try(local.application_services[dep].dependencies, []) :
              indirect_dep
              if !contains(config.dependencies, indirect_dep)
            ]
          ])
        )))
        is_leaf_service = length(config.dependencies) == 0
        is_root_service = length([
          for other_service, other_config in local.application_services :
          other_service
          if contains(other_config.dependencies, service_name)
        ]) == 0
      }
    }
  }
  
  # ============================================================================
  # PERFORMANCE-OPTIMIERTE FOR-EXPRESSIONS
  # ============================================================================
  
  # Optimierte Transformationen für große Datensätze
  optimized_transformations = {
    # Service-Lookup-Table (schneller Zugriff)
    service_lookup = {
      for service_name, config in local.application_services :
      service_name => {
        port = config.port
        health_check = config.health_check
        tier = config.labels.tier
        team = config.labels.team
      }
    }
    
    # Port-Index für schnelle Suche
    port_index = {
      for service_name, config in local.application_services :
      config.port => service_name
    }
    
    # Team-Index für gruppierte Operationen
    team_index = {
      for team in distinct([
        for service_name, config in local.application_services :
        config.labels.team
      ]) :
      team => [
        for service_name, config in local.application_services :
        service_name
        if config.labels.team == team
      ]
    }
    
    # Tier-Index für Load Balancer-Konfiguration
    tier_index = {
      for tier in distinct([
        for service_name, config in local.application_services :
        config.labels.tier
      ]) :
      tier => {
        services = [
          for service_name, config in local.application_services :
          service_name
          if config.labels.tier == tier
        ]
        total_replicas = sum([
          for service_name, config in local.application_services :
          config.replicas
          if config.labels.tier == tier
        ])
        needs_load_balancer = contains(["presentation", "application"], tier)
      }
    }
    
    # Cached berechnete Werte
    resource_summary = {
      total_services = length(keys(local.application_services))
      total_replicas = sum([
        for service_name, config in local.application_services :
        config.replicas
      ])
      total_cpu_millicores = sum([
        for service_name, config in local.application_services :
        tonumber(split("m", config.cpu_limit)[0]) * config.replicas
      ])
      total_memory_mi = sum([
        for service_name, config in local.application_services :
        tonumber(split("Mi", config.memory_limit)[0]) * config.replicas
      ])
      total_storage_gi = sum([
        for service_name, config in local.application_services :
        sum([
          for volume in config.volumes :
          tonumber(split("Gi", volume.size)[0])
        ])
      ])
      unique_teams = length(distinct([
        for service_name, config in local.application_services :
        config.labels.team
      ]))
      unique_tiers = length(distinct([
        for service_name, config in local.application_services :
        config.labels.tier
      ]))
    }
  }
}
HCL

For-Expression-Syntax-Varianten:

| Syntax | Zweck | Beispiel | Output-Typ |
| --- | --- | --- | --- |
| `[for item in list : expression]` | List-Comprehension | `[for s in services : s.name]` | List |
| `{for key, value in map : key => expression}` | Object-Comprehension | `{for k, v in map : k => v.port}` | Map |
| `[for item in list : expression if condition]` | Conditional List | `[for s in services : s if s.enabled]` | List |
| `{for key, value in map : key => expression if condition}` | Conditional Object | `{for k, v in map : k => v if v.active}` | Map |
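
💡 Eine weitere Variante ist der Grouping-Mode mit `...`: Kollidieren mehrere Werte auf demselben Key, werden sie zu einer Liste gruppiert statt einen Fehler auszulösen. Das Beispiel nutzt die `application_services`-Struktur von oben:

# Services nach Team gruppieren – ohne "..." wäre ein doppelter Key ein Fehler
locals {
  services_by_team = {
    for name, svc in local.application_services :
    svc.labels.team => name...
  }
  # Ergebnis: { "backend-team" = ["backend", "worker"], "frontend-team" = ["frontend"], "platform-team" = ["cache"] }
}
HCL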

🔧 Praktische Anwendung in echten Ressourcen:

# Dynamische Kubernetes-Deployments mit For-Expressions
resource "kubernetes_deployment" "services" {
  for_each = local.conditional_transformations.web_services
  
  metadata {
    name = each.key
    labels = merge(
      each.value.labels,
      {
        managed-by = "terraform"
        environment = var.environment
      }
    )
  }
  
  spec {
    replicas = each.value.replicas
    
    selector {
      match_labels = {
        app = each.key
      }
    }
    
    template {
      metadata {
        labels = merge(
          each.value.labels,
          {
            app = each.key
          }
        )
      }
      
      spec {
        container {
          name  = each.key
          image = "${each.key}:${each.value.labels.version}"
          
          port {
            container_port = each.value.port
          }
          
          # Dynamic Environment Variables
          dynamic "env" {
            for_each = {
              for key, value in merge(
                {
                  SERVICE_NAME = each.key
                  SERVICE_PORT = tostring(each.value.port)
                  ENVIRONMENT = var.environment
                },
                # Service-spezifische Umgebungsvariablen
                lookup(local.service_environment_variables, each.key, {})
              ) :
              key => value
            }
            
            content {
              name  = env.key
              value = env.value
            }
          }
          
          # Dynamic Volume Mounts
          dynamic "volume_mount" {
            for_each = {
              for volume in each.value.volumes :
              volume.name => volume
            }
            
            content {
              name       = volume_mount.key
              mount_path = volume_mount.value.path
            }
          }
          
          # Health Check
          liveness_probe {
            http_get {
              path = each.value.health_check
              port = each.value.port
            }
            initial_delay_seconds = 30
            period_seconds = 10
          }
          
          readiness_probe {
            http_get {
              path = each.value.health_check
              port = each.value.port
            }
            initial_delay_seconds = 5
            period_seconds = 5
          }
          
          resources {
            limits = {
              cpu    = each.value.cpu_limit
              memory = each.value.memory_limit
            }
            requests = {
              cpu    = "${floor(tonumber(split("m", each.value.cpu_limit)[0]) * 0.5)}m"
              memory = "${floor(tonumber(split("Mi", each.value.memory_limit)[0]) * 0.5)}Mi"
            }
          }
        }
        
        # Dynamic Volumes
        dynamic "volume" {
          for_each = {
            for volume in each.value.volumes :
            volume.name => volume
          }
          
          content {
            name = volume.key
            persistent_volume_claim {
              claim_name = "${each.key}-${volume.key}-pvc"
            }
          }
        }
      }
    }
  }
}

# Load Balancer Services mit For-Expressions
resource "kubernetes_service" "load_balancers" {
  for_each = local.nested_transformations.multi_region_deployments["us-east-1"].services
  
  metadata {
    name = "${each.key}-lb"
    labels = {
      service = each.key
      type = "load-balancer"
    }
  }
  
  spec {
    selector = {
      app = each.key
    }
    
    # Dynamic Ports
    dynamic "port" {
      for_each = {
        for target in each.value.load_balancer_targets :
        target.target_id => target
      }
      
      content {
        name        = "http"
        port        = 80
        target_port = port.value.port
        protocol    = "TCP"
      }
    }
    
    type = "LoadBalancer"
  }
}

# Persistent Volume Claims mit For-Expressions
resource "kubernetes_persistent_volume_claim" "service_storage" {
  for_each = {
    for volume_config in local.nested_transformations.all_service_volumes :
    volume_config.full_name => volume_config
  }
  
  metadata {
    name = "${each.key}-pvc"
    labels = {
      service = each.value.service
      volume = each.value.volume_name
    }
  }
  
  spec {
    access_modes = ["ReadWriteOnce"]
    resources {
      requests = {
        storage = each.value.volume_size
      }
    }
    storage_class_name = "gp2"
  }
}
HCL

For-Expression-Performance-Optimierung:

┌────────────────────────────────────────────────────────────┐
│                For-Expression Performance                   │
├────────────────────────────────────────────────────────────┤
│                                                            │
│  1. DATEN-VORBEREITUNG                                     │
│     ├─ Kleine Datensätze verwenden                         │
│     ├─ Unnötige Felder entfernen                           │
│     ├─ Indizierung für Lookups                             │
│     └─ Caching in Locals                                   │
│                                                            │
│  2. EXPRESSION-OPTIMIERUNG                                 │
│     ├─ Einfache Expressions bevorzugen                     │
│     ├─ Verschachtelung minimieren                          │
│     ├─ Conditional Logic früh anwenden                     │
│     └─ Flattening nur wenn nötig                           │
│                                                            │
│  3. ITERATION-EFFIZIENZ                                    │
│     ├─ for_each > count                                    │
│     ├─ Distinct() für Duplikate                            │
│     ├─ Slice() für Teilmengen                              │
│     └─ Lookup() für Zugriffe                               │
│                                                            │
│  4. MEMORY-MANAGEMENT                                      │
│     ├─ Große Listen vermeiden                              │
│     ├─ Streaming-Pattern nutzen                            │
│     ├─ Garbage Collection beachten                         │
│     └─ State-Größe minimieren                              │
│                                                            │
└────────────────────────────────────────────────────────────┘
Markdown
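Die unter "Iteration-Effizienz" genannten Funktionen lassen sich kompakt demonstrieren. Die folgende Skizze verwendet hypothetische Beispieldaten (service_ports, port_names sind frei erfunden):

```hcl
locals {
  service_ports = [8080, 8080, 9090, 3000, 9090]

  # distinct() entfernt Duplikate vor der Iteration
  unique_ports = distinct(local.service_ports)   # [8080, 9090, 3000]

  # slice() begrenzt die Iteration auf eine Teilmenge
  first_two_ports = slice(local.unique_ports, 0, 2)

  # lookup() mit Default-Wert statt verschachtelter Conditionals;
  # Map-Keys sind in HCL immer Strings, daher tostring()
  port_names = {
    "8080" = "http-alt"
    "9090" = "metrics"
  }
  resolved = [
    for p in local.unique_ports :
    lookup(local.port_names, tostring(p), "unknown")
  ]
}
```

Alle drei Funktionen arbeiten auf bereits vorbereiteten Daten in Locals, sodass die eigentliche for_each-Iteration klein und schnell bleibt.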

For-Expression-Debugging-Strategien:

| Debugging-Technik | Anwendung | Beispiel |
|---|---|---|
| Schrittweiser Aufbau | Komplexe Expressions teilen | Eine Transformation pro Schritt |
| Terraform Console | Interaktive Tests | terraform console |
| Print-Debugging | Outputs für Zwischenergebnisse | output "debug" { value = local.test } |
| Type-Validation | Konsistenz prüfen | can() für Type-Checks |

💡 For-Expression-Best-Practices:

┌ Verwende aussagekräftige Variable-Namen in Iterationen
├ Halte Expressions so einfach wie möglich
├ Cache komplexe Berechnungen in Locals
├ Nutze Conditional Logic für Performance-Optimierung
└ Dokumentiere komplexe Transformationen ausführlich
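Das Caching-Prinzip aus der Liste oben lässt sich so skizzieren (service_config ist ein hypothetisches Beispiel, das Split-Muster entspricht dem weiter oben gezeigten Code):

```hcl
locals {
  service_config = {
    web = { cpu_limit = "500m", replicas = 3 }
    api = { cpu_limit = "250m", replicas = 2 }
  }

  # Die teure Berechnung einmal hier ausführen und cachen,
  # statt sie in jeder Ressource zu wiederholen:
  cpu_requests = {
    for name, cfg in local.service_config :
    name => "${floor(tonumber(split("m", cfg.cpu_limit)[0]) * 0.5)}m"
  }
}

# Ressourcen greifen nur noch auf das gecachte Ergebnis zu:
# requests = { cpu = local.cpu_requests[each.key] }
```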

⚠️ For-Expression-Stolperfallen:

| Problem | Symptom | Lösung |
|---|---|---|
| Zirkuläre Abhängigkeiten | Error: Cycle in values | Abhängigkeiten neu strukturieren |
| Type-Inkonsistenz | Error: Inconsistent types | Explizite Type-Konvertierung |
| Performance-Probleme | Langsame Plan-Zeiten | Expressions optimieren |
| Memory-Erschöpfung | Error: Out of memory | Datensätze verkleinern |
| Komplexe Nested Loops | Unlesbare Expressions | In mehrere Schritte aufteilen |
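Die explizite Type-Konvertierung gegen "Inconsistent types" lässt sich an einer Conditional zeigen. Die Variable replica_override ist ein hypothetisches Beispiel:

```hcl
variable "replica_override" {
  type    = string
  default = ""
}

locals {
  # Fehlerquelle: ein Zweig liefert string, der andere number --
  # Terraform meldet "Inconsistent conditional result types":
  # replicas = var.replica_override != "" ? var.replica_override : 3

  # Lösung: beide Zweige explizit auf denselben Typ bringen
  replicas = var.replica_override != "" ? tonumber(var.replica_override) : 3
}
```

Dasselbe Prinzip gilt in For-Expressions: Konvertiere Werte mit tonumber(), tostring() oder tolist(), bevor du sie in einer gemeinsamen Liste oder Map zusammenführst.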

For-Expression-Debugging-Techniken:

# For-Expression-Ergebnisse in terraform console testen
terraform console
> [for s in local.application_services : s.name]
> {for k, v in local.application_services : k => v.port}
> local.nested_transformations.all_service_volumes

# Schrittweise Debugging
terraform console
> local.application_services
> [for s in local.application_services : s]
> [for s in local.application_services : s.name]
> [for s in local.application_services : s.name if s.replicas > 2]

# Performance-Analyse
terraform plan -detailed-exitcode
time terraform plan
Bash

Erweiterte For-Expression-Patterns:

# Erweiterte Patterns für Production-Code
locals {
  # Pattern 1: Hierarchische Daten flattieren
  flattened_configuration = flatten([
    for region_name, region in local.regions : [
      for service_name, service in local.application_services : {
        key = "${region_name}-${service_name}"
        region = region_name
        service = service_name
        az_count = length(region.azs)
        vpc_cidr = region.vpc_cidr
        service_port = service.port
        service_replicas = service.replicas
        full_config = merge(service, {
          region = region_name
          azs = region.azs
        })
      }
    ]
  ])
  
  # Pattern 2: Lookup-Tables für Performance
  service_by_port = {
    for service_name, config in local.application_services :
    config.port => service_name
  }
  
  # Pattern 3: Conditional Aggregation
  resource_allocation = {
    for env_name, env_config in local.environments :
    env_name => {
      services = {
        for service_name, service_config in local.application_services :
        service_name => {
          replicas = max(1, ceil(service_config.replicas * env_config.replica_multiplier))
          cpu_total = ceil(tonumber(split("m", service_config.cpu_limit)[0]) * env_config.cpu_multiplier * max(1, ceil(service_config.replicas * env_config.replica_multiplier)))
          memory_total = ceil(tonumber(split("Mi", service_config.memory_limit)[0]) * env_config.memory_multiplier * max(1, ceil(service_config.replicas * env_config.replica_multiplier)))
        }
      }
      totals = {
        cpu = sum([
          for service_name, service_config in local.application_services :
          ceil(tonumber(split("m", service_config.cpu_limit)[0]) * env_config.cpu_multiplier * max(1, ceil(service_config.replicas * env_config.replica_multiplier)))
        ])
        memory = sum([
          for service_name, service_config in local.application_services :
          ceil(tonumber(split("Mi", service_config.memory_limit)[0]) * env_config.memory_multiplier * max(1, ceil(service_config.replicas * env_config.replica_multiplier)))
        ])
        replicas = sum([
          for service_name, service_config in local.application_services :
          max(1, ceil(service_config.replicas * env_config.replica_multiplier))
        ])
      }
    }
  }
  
  # Pattern 4: Error-Handling mit can()
  safe_transformations = {
    for service_name, config in local.application_services :
    service_name => {
      cpu_millicores = can(tonumber(split("m", config.cpu_limit)[0])) ? tonumber(split("m", config.cpu_limit)[0]) : 0
      memory_mi = can(tonumber(split("Mi", config.memory_limit)[0])) ? tonumber(split("Mi", config.memory_limit)[0]) : 0
      has_health_check = can(config.health_check) && config.health_check != null
      volume_count = can(length(config.volumes)) ? length(config.volumes) : 0
      has_dependencies = can(length(config.dependencies)) ? length(config.dependencies) > 0 : false
    }
  }
}
HCL

For-Expressions sind das mächtigste Werkzeug für Daten-Transformation in Terraform. Sie ermöglichen es dir, aus statischen Konfigurationen dynamische, datengetriebene Infrastruktur zu erstellen. Mit List-Comprehensions, Object-Comprehensions, Conditional Logic und Nested Loops kannst du komplexe Transformationen in eleganten, lesbaren Ausdrücken implementieren. Diese Beherrschung der For-Expressions macht dich zum HCL-Experten, der Terraform-Konfigurationen schreibt, die sich automatisch an sich ändernde Anforderungen anpassen.

Weiterführende Ressourcen

Nach dem Abschluss dieses Artikels hast du das Fundament für professionelle Terraform-Konfigurationen gelegt. Hier findest du die wichtigsten Ressourcen, um dein Wissen zu vertiefen und praktische Erfahrungen zu sammeln.

Offizielle Dokumentation

HashiCorp Terraform Documentation

Die offizielle Dokumentation ist deine beste Anlaufstelle für detaillierte Informationen zu allen Terraform-Features. Sie wird regelmäßig aktualisiert und bietet sowohl Einsteiger- als auch Fortgeschrittenen-Inhalte.

Provider-Dokumentation

AWS Provider:

Azure Provider:

Google Cloud Provider:

Terraform Registry

Terraform Registry

Das Terraform Registry ist deine zentrale Anlaufstelle für öffentliche Provider und Module. Hier findest du tausende vorgefertigte Module für gängige Infrastruktur-Patterns.

Empfohlene Bücher

Kostenlose Ressourcen:

  • Terraform Best Practices (Free): Kostenlose Online-Ressource mit bewährten Praktiken
  • HashiCorp’s Terraform Documentation (Free): Umfassende kostenlose Dokumentation

Kostenpflichtige Bücher:

  • „Terraform: Up & Running“ von Yevgeniy Brikman: Eines der beliebtesten Terraform-Bücher
  • „Terraform in Action“ von Scott Winkler: Praktischer Leitfaden mit realen Beispielen
  • „The Terraform Book“ von James Turnbull: Praktische Anleitung für Einsteiger
Online-Kurse und Tutorials

Kostenlose Kurse:

  • Hands-On Terraform Foundations (Udemy): 3-stündiger Kurs für absolute Einsteiger
  • Terraform 101 (Udemy): 2-stündiger Kurs zu den Terraform-Grundlagen
  • Terraform + AWS (Udemy): 2-stündiger Kurs für AWS-Integration

Offizielle HashiCorp-Ressourcen:

  • Get Started Tutorials: Tutorials für AWS, Azure, Google Cloud und Docker
  • Terraform Associate Certification: Vorbereitung auf die offizielle Zertifizierung
Community und Support

Community-Ressourcen:

  • Terraform Community auf GitHub: https://github.com/shuaibiyy/awesome-tf
  • Terraform Discuss: Offizielles Community-Forum
  • Terraform Twitter Community: Aktive Twitter-Community
  • weekly.tf: Wöchentlicher Terraform-Newsletter
Praktische Tools

Terraform-docs:

Tool zur automatischen Generierung von Dokumentation aus Terraform-Modulen.

Zertifizierung

HashiCorp Terraform Associate (003):

  • Zertifizierungs-Tutorials: 37 Tutorials zur Vorbereitung
  • Exam Preparation Guide: Umfassender Vorbereitungsleitfaden

Die offizielle Zertifizierung validiert deine Terraform-Kenntnisse und ist in der Industrie anerkannt.

💡 Tipp: Beginne mit der offiziellen Dokumentation und den kostenlosen Ressourcen. Sobald du praktische Erfahrungen gesammelt hast, sind die kostenpflichtigen Bücher eine wertvolle Investition für tieferes Verständnis.

Mit diesen Ressourcen hast du alles, was du brauchst, um deine Terraform-Kenntnisse kontinuierlich zu erweitern und in der Praxis anzuwenden. Die Kombination aus offizieller Dokumentation, Community-Support und praktischen Übungen wird dich schnell zum Terraform-Experten machen.

Fazit

Mit der Beherrschung der erweiterten HCL-Syntax hast du das Fundament für professionelle Terraform-Entwicklung gelegt. Komplexe Datenstrukturen, intelligente Validierungen, sichere Credential-Verwaltung und mächtige Built-in-Funktionen verwandeln deine Infrastruktur-Definitionen von statischen Skripten in dynamische, selbst-adaptierende Systeme.

Diese Sprach-Expertise ist die Voraussetzung für alle weiteren Terraform-Herausforderungen. Du schreibst jetzt wartbaren, wiederverwendbaren Code, der sich elegant an verschiedene Umgebungen anpasst und dabei lesbar bleibt.

💡 Kommende Inhalte: Im dritten Teil der Serie tauchen wir tief in das Terraform State Management ein - das Herzstück jeder professionellen Terraform-Implementierung. Du lernst Remote State, Locking-Mechanismen, Team-Workflows und die Best Practices, die deine Infrastruktur skalierbar und team-fähig machen.

HCL-Syntax gemeistert – Zeit für die nächste Stufe deiner Terraform-Reise.