How to convert a PUBLIC AWS SFTP server to VPC hosted using Terraform


I have previously submitted another question with similar content, but since this problem is not the same as the first one, I was told to create a separate question.

I have an existing SFTP server, created with Terraform, up and running. It has a PUBLIC endpoint, and I want to convert it to VPC hosted with IP whitelisting to secure my server.

The Terraform aws_transfer_server resource currently only supports the endpoint_type values PUBLIC and VPC_ENDPOINT. So I use a null_resource to run an aws CLI command that updates the SFTP server after it has been created. The Terraform snippet I plan to use is below:

//Existing command to create a public sftp server
resource "aws_transfer_server" "sftp" {
  identity_provider_type = "API_GATEWAY"
  url                    = aws_api_gateway_deployment.api.invoke_url
  logging_role           = aws_iam_role.log_role.arn
  invocation_role        = aws_iam_role.api_exec_role.arn
}

resource "aws_vpc_endpoint" "transfer" {
  vpc_id            = var.vpc_id
  service_name      = "com.amazonaws.${var.aws_region}.transfer.server"
  vpc_endpoint_type = "Interface"
  subnet_ids        = var.subnet_ids
  security_group_ids = [
    aws_security_group.sg.id
  ]
  tags = {
    Name        = "${var.application}-${var.environment}-vpce"
  }
}

resource "null_resource" "update_sftp_server" {
  provisioner "local-exec" {
    command = <<EOF
aws transfer update-server --server-id ${aws_transfer_server.sftp.id} --endpoint-type VPC --endpoint-details SubnetIds="${join("\",\"",var.subnet_ids)}",AddressAllocationIds="${join("\",\"",toset(aws_eip.nlb.*.id))}",VpcId="${var.vpc_id}"
EOF
  }
  depends_on = [aws_transfer_server.sftp, aws_vpc_endpoint.transfer]
}

The null_resource runs the following aws command:

aws transfer update-server --server-id s-## --endpoint-type VPC --endpoint-details SubnetIds="subnet-##","subnet-##",AddressAllocationIds="eipalloc-##","eipalloc-##",VpcId="vpc-##"

This throws the following exception:

exit status 254. Output:
An error occurred (InvalidRequestException) when calling the UpdateServer operation: Cannot specify AddressAllocationids when updating server to EndpointType: VPC

I noticed that if I remove the AddressAllocationIds part from the aws command, it works. But then the Availability Zone mapping from the subnets to the Elastic IPs does not happen, as shown below:

[screenshot: Transfer server endpoint configuration without the subnet-to-Elastic-IP Availability Zone mapping]

After converting this PUBLIC sftp server to VPC hosted, I verified that, after waiting a while, I can run the command again to include the AddressAllocationIds separately. But if I create another null_resource and try to run that command from it, it gives me the same error. Can someone help me achieve this?
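For anyone trying this by hand first, the sequence that eventually works can be sketched as plain CLI calls (all IDs below are placeholders; in practice you have to poll describe-server between steps, because AddressAllocationIds can only be changed while the server is OFFLINE):

```shell
SERVER_ID="s-xxxxxxxx"   # placeholder server ID

# Stop the server; Elastic IPs can only be attached while it is OFFLINE
aws transfer stop-server --server-id "$SERVER_ID"
aws transfer describe-server --server-id "$SERVER_ID" --query 'Server.State'

# Step 1: convert to VPC *without* AddressAllocationIds
aws transfer update-server --server-id "$SERVER_ID" --endpoint-type VPC \
  --endpoint-details SubnetIds="subnet-aa","subnet-bb",VpcId="vpc-xx"

# Step 2: once the endpoint type is already VPC, attach the Elastic IPs
aws transfer update-server --server-id "$SERVER_ID" \
  --endpoint-details SubnetIds="subnet-aa","subnet-bb",AddressAllocationIds="eipalloc-aa","eipalloc-bb",VpcId="vpc-xx"

# Start the server again
aws transfer start-server --server-id "$SERVER_ID"
```

This is exactly why a single null_resource cannot do it: two separate update-server calls are required, with state polling in between.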

My environment details are as follows:

Terraform v0.12.28
provider.aws v3.0.0
provider.null v2.1.2
aws-cli/2.0.33 Python/3.7.7 Windows/10 botocore/2.0.0dev37

Solution

I spoke to AWS Support, and they suggested creating a lambda function to do this, since at this time it cannot be done with a single aws cli command.

The Lambda function in Python, zipped and named update_sftp_to_vpc:

import json
import boto3
import time


def isNullOrWhiteSpace(str=None):
    return not str or str.isspace()


def isEmptyList(lst=[]):
    return not lst


def lambda_handler(event, context):
    transfer_client = boto3.client('transfer')

    subnetIds = event['subnetIds']
    allocationIds = event['allocationIds']
    vpcId = event['vpcId']
    serverId = event['serverId']

    if(isEmptyList(subnetIds) or isEmptyList(allocationIds) or isNullOrWhiteSpace(vpcId) or isNullOrWhiteSpace(serverId)):
        return {
            'statusCode': 400,
            'body': json.dumps('subnetIds or allocationIds or vpcId or serverId is empty')
        }
    else:
        print(
            f"Get the server details of server: {serverId} to confirm if the Server is already VPC and AddressAllocationIds exist")
        serverDetails = transfer_client.describe_server(ServerId=serverId)
        endpointType = serverDetails['Server']['EndpointType']
        if(endpointType.upper() == 'VPC'):
            # .get() avoids a KeyError when the server is VPC but has no EIPs attached yet
            existingAllocationIds = serverDetails['Server']['EndpointDetails'].get(
                'AddressAllocationIds', [])
            if(allocationIds == existingAllocationIds):
                return {
                    'statusCode': 200,
                    'body': json.dumps(f"Server: {serverId} is already of type {endpointType} and with allocationIds: {existingAllocationIds}")
                }

        print(f"Make sure that the server: {serverId} is stopped")
        transfer_client.stop_server(ServerId=serverId)

        print(
            f"Get the server details of server: {serverId} and wait until the server is offline")
        serverDetails = transfer_client.describe_server(ServerId=serverId)
        while True:
            serverState = serverDetails['Server']['State']
            if(serverState.upper() == 'OFFLINE'):
                break
            print(
                f"Server: {serverId} state is {serverState}. Sleeping for 10 seconds until it is OFFLINE")
            time.sleep(10)
            serverDetails = transfer_client.describe_server(ServerId=serverId)

        endpointType = serverDetails['Server']['EndpointType']
        if(endpointType.upper() != 'VPC'):
            print(
                f"Update the server: {serverId} to a VPC Hosted without AllocationIds")
            transfer_client.update_server(
                EndpointDetails={
                    'SubnetIds': subnetIds,
                    'VpcId': vpcId
                },
                EndpointType='VPC',
                ServerId=serverId
            )

        print(
            f"Get the server: {serverId} details and wait until the server is now VPC Hosted")
        serverDetails = transfer_client.describe_server(ServerId=serverId)
        while True:
            endpointType = serverDetails['Server']['EndpointType']
            if(endpointType.upper() == 'VPC'):
                break
            print(
                f"Server: {serverId} state is {serverDetails['Server']['EndpointType']}. Sleeping for 10 seconds until it is VPC")
            time.sleep(10)
            serverDetails = transfer_client.describe_server(ServerId=serverId)

        print(f"Get server: {serverId} details again for vpc endpoint details")
        serverDetails = transfer_client.describe_server(ServerId=serverId)

        vpc_client = boto3.client('ec2')
        vpcEndpointId = serverDetails['Server']['EndpointDetails']['VpcEndpointId']
        vpcEndpointDetails = vpc_client.describe_vpc_endpoints(
            VpcEndpointIds=[vpcEndpointId],
            Filters=[
                {'Name': 'vpc-endpoint-id', 'Values': [vpcEndpointId]},
            ])
        while True:
            currentVpcState = vpcEndpointDetails['VpcEndpoints'][0]['State']
            if(currentVpcState.lower() == 'available'):
                break
            print(
                f"VpcEndpointId: {vpcEndpointId} state is {currentVpcState}. Sleeping for 10 seconds until it is Available")
            time.sleep(10)
            vpcEndpointDetails = vpc_client.describe_vpc_endpoints(
                VpcEndpointIds=[vpcEndpointId],
                Filters=[
                    {'Name': 'vpc-endpoint-id', 'Values': [vpcEndpointId]},
                ])

        print(f"Update the server: {serverId} again with AddressAllocationIds")
        transfer_client.update_server(
            EndpointDetails={
                'AddressAllocationIds': allocationIds,
                'SubnetIds': subnetIds,
                'VpcId': vpcId
            },
            ServerId=serverId
        )

        print(f"Start the server: {serverId}")
        transfer_client.start_server(ServerId=serverId)

        return {
            'statusCode': 200,
            'body': json.dumps(f"Sftp server {serverId} is converted to use the VpcId: {vpcId}")
        }
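The repeated sleep-and-describe loops above could also be factored into a small generic helper; a minimal sketch (the `wait_until` name, interval, and timeout defaults are my own choices, not part of the original answer):

```python
import time


def wait_until(predicate, interval=10, timeout=180):
    """Poll predicate() until it returns truthy, or raise TimeoutError.

    predicate: zero-argument callable that is truthy once the awaited
    condition holds (e.g. server OFFLINE, endpoint type VPC, endpoint
    state 'available').
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    raise TimeoutError("condition not met within timeout")


# Example with a fake state sequence that reaches OFFLINE on the third poll
states = iter(["STOPPING", "STOPPING", "OFFLINE"])
print(wait_until(lambda: next(states) == "OFFLINE", interval=0))  # True
```

In the Lambda this would be called as, e.g., `wait_until(lambda: transfer_client.describe_server(ServerId=serverId)['Server']['State'].upper() == 'OFFLINE')`, which also adds the upper bound on waiting that the original while-loops lack.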

Then all I have to do is invoke the lambda function from my Terraform as shown below.

resource "aws_lambda_function" "update_sftp_to_vpc" {
  function_name    = "update-sftp-to-vpc"
  description      = "Update sftp server to vpc hosted for IP Whitelisting"
  role             = alks_iamrole.update_server_lambda.arn
  runtime          = "python3.8"
  handler          = "lambda_function.lambda_handler"
  timeout          = "180" //Set to 3 mins. It may need to be revisited if the sftp server takes more time to stop
  filename         = "update_sftp_to_vpc.zip"
  source_code_hash = filebase64sha256("update_sftp_to_vpc.zip")
}

data "aws_lambda_invocation" "update_public_sftp_server_to_vpc" {
  function_name = aws_lambda_function.update_sftp_to_vpc.function_name

  input      = <<JSON
{
  "serverId":"${aws_transfer_server.sftp.id}","vpcId":"${var.vpc_id}","subnetIds":${format("[\"%s\"]",join("\",\"",var.subnet_ids))},"allocationIds":${format("[\"%s\"]",join("\",\"",toset(aws_eip.nlb.*.id)))}
}
JSON
  depends_on = [aws_transfer_server.sftp, aws_lambda_function.update_sftp_to_vpc]
}
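As a side note, the format/join string-building in the input block can be avoided with Terraform's jsonencode function, which I believe produces the same payload (untested sketch against the same variables):

```hcl
  input = jsonencode({
    serverId      = aws_transfer_server.sftp.id
    vpcId         = var.vpc_id
    subnetIds     = var.subnet_ids
    allocationIds = aws_eip.nlb.*.id
  })
```

jsonencode also escapes values correctly, so malformed JSON from a stray quote is no longer possible.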

