Description
Please provide a clear and concise description of the issue you are encountering, and a reproduction of your configuration (see the examples/* directory for references that you can copy+paste and tailor to match your configs if you are unable to copy your exact configuration). The reproduction MUST be executable by running terraform init && terraform apply without any further changes.
If your request is for a new feature, please use the Feature request template.
- ✋ I have searched the open/closed issues and my issue is not listed.
⚠️ Note
Before you submit an issue, please perform the following first:
- Remove the local .terraform directory (! ONLY if state is stored remotely, which hopefully you are following that best practice!): rm -rf .terraform/
- Re-initialize the project root to pull down modules: terraform init
- Re-attempt your terraform plan or apply and check if the issue still persists
Versions
- Module version [Required]: Latest
- Terraform version: v1.13.3
- Provider version(s): hashicorp/aws v6.18.0
Reproduction Code [Required]
module "elasticache" {
source = "../../"
replication_group_id = local.name
create_primary_global_replication_group = true
engine = "valkey"
engine_version = "7.2"
node_type = "cache.r6g.large"
snapshot_retention_limit = 7
port = 6399
at_rest_encryption_enabled = true
transit_encryption_enabled = true
transit_encryption_mode = "required"
auth_token = "supersecure"
auth_token_update_strategy = "SET"
kms_key_arn = "dummy-kms"
cluster_mode = "enabled"
cluster_mode_enabled = true
multi_az_enabled = true
automatic_failover_enabled = true
auto_minor_version_upgrade = false
network_type = "ipv4"
num_node_groups = 4
replicas_per_node_group = 2
maintenance_window = "fri:20:30-fri:21:30"
snapshot_window = "19:30-20:30"
apply_immediately = true
vpc_id = data.terraform_remote_state.vpc.outputs.vpc_id
subnet_ids = data.terraform_remote_state.vpc.outputs.subnets
subnet_group_name = local.name
subnet_group_description = "${title(local.name)} subnet group"
create_security_group = true
security_group_rules = {
ingress_vpc = {
from_port = 6399
to_port = 6399
description = "xxxx"
prefix_list_id = "xxx"
}
}
security_group_description = "xxxx"
security_group_tags = merge({ "Name" = "${local.name}-sg" }, local.tags)
create_parameter_group = true
parameter_group_name = local.name
parameter_group_family = "valkey7"
parameter_group_description = "${title(local.name)} parameter group"
parameters = [
{
name = "latency-tracking"
value = "yes"
}
]
log_delivery_configuration = {
slow-log = {
destination_type = "cloudwatch-logs"
log_format = "json"
}
}
tags = local.tags
}
module "elasticache_secondary" {
source = "../../"
providers = { aws = aws.secondary }
create_secondary_global_replication_group = true
global_replication_group_id = module.elasticache.global_replication_group_id
replication_group_id = local.name
snapshot_retention_limit = 7
port = 6399
auth_token = "supersecure"
auth_token_update_strategy = "SET"
kms_key_arn = local.dr_kms_key
cluster_mode = "enabled"
cluster_mode_enabled = true
multi_az_enabled = true
network_type = "ipv4"
num_node_groups = 4
replicas_per_node_group = 2
maintenance_window = "fri:20:30-fri:21:30"
snapshot_window = "19:30-20:30"
apply_immediately = true
vpc_id = data.vpc_secondary.outputs.vpc_id
subnet_ids = data.vpc_secondary.outputs.dubnets
subnet_group_name = local.name
subnet_group_description = "${title(local.name)} subnet group"
create_security_group = true
security_group_rules = {
ingress_vpc = {
from_port = 6399
to_port = 6399
description = "xxxx"
prefix_list_id = "xxxx"
}
}
security_group_description = "xxxx"
security_group_tags = merge({ "Name" = "${local.name}-sg" }, local.tags)
create_parameter_group = true
parameter_group_name = local.name
parameter_group_family = "valkey7"
parameter_group_description = "${title(local.name)} parameter group"
parameters = [
{
name = "latency-tracking"
value = "yes"
}
]
log_delivery_configuration = {
slow-log = {
destination_type = "cloudwatch-logs"
log_format = "json"
}
}
tags = local.tags
}
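For completeness, the reproduction references a few values from the surrounding configuration. They are roughly equivalent to the sketch below; the regions, remote state backend/paths, tags, and KMS ARN are placeholders rather than my exact values, while local.name matches the replication group ID in the error output further down.

provider "aws" {
  region = "eu-west-1" # primary region (placeholder)
}

provider "aws" {
  alias  = "secondary"
  region = "eu-west-2" # secondary region (placeholder)
}

data "terraform_remote_state" "vpc" {
  backend = "local" # placeholder backend; actual backend config elided
  config = {
    path = "../vpc/terraform.tfstate" # placeholder path
  }
}

data "terraform_remote_state" "vpc_secondary" {
  backend = "local" # placeholder backend; actual backend config elided
  config = {
    path = "../vpc-secondary/terraform.tfstate" # placeholder path
  }
}

locals {
  name       = "abcd-valkey"        # matches the replication group ID in the error output
  tags       = { Example = "abcd" } # placeholder tags
  dr_kms_key = "arn:aws:kms:..."    # placeholder; KMS key ARN in the secondary region
}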
Steps to reproduce the behavior:
yes
- Create a Valkey global cluster in 2 regions with 4 shards (num_node_groups), as in the reproduction above.
- Try to reduce the shards from 4 to 1 in module "elasticache" (see the sketch below).
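The resharding attempt is just this one change to the primary module, with everything else left exactly as in the reproduction above (a sketch, not the full block):

module "elasticache" {
  source = "../../"

  # ... all other arguments exactly as in the reproduction above ...

  num_node_groups         = 1 # reduced from 4 to trigger the reshard
  replicas_per_node_group = 2
}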
Expected behavior
The number of shards is reduced from 4 to 1 in both clusters, one in each region.
Actual behavior
Error: modifying ElastiCache Replication Group (abcd-valkey) shard configuration: operation error ElastiCache: ModifyReplicationGroupShardConfiguration, https response error StatusCode: 400, RequestID: , InvalidParameterValue: Cluster [abcd-valkey] is part of a global cluster [erpgt-abcd-valkey]. Request rejected.
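For what it's worth, the rejection appears to come from AWS itself: ModifyReplicationGroupShardConfiguration is not allowed on a member of a global datastore, and the corresponding API operations (IncreaseNodeGroupsInGlobalReplicationGroup / DecreaseNodeGroupsInGlobalReplicationGroup) act on the global replication group instead. My assumption (not verified against this module's internals) is that the shard count would need to be driven through something like the aws_elasticache_global_replication_group resource:

resource "aws_elasticache_global_replication_group" "this" {
  global_replication_group_id_suffix = "abcd-valkey" # placeholder suffix
  primary_replication_group_id       = "abcd-valkey" # ID of the primary member replication group

  # Assumption: resharding a global datastore is done here rather than on the
  # member replication groups, mirroring the
  # Increase/DecreaseNodeGroupsInGlobalReplicationGroup API operations.
  num_node_groups = 1
}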