# aws
proud-art-41399:
Hi, I'm trying to create a Batch job definition that would use the ARM64 platform on ECS/Fargate. Support for this was recently added (see this issue). I've built the ARM64 version of the Docker image (a build sketch follows the program below). This is the relevant part of the Pulumi program for the job definition:
```python
import json

import pulumi
import pulumi_aws as aws

job_definition = aws.batch.JobDefinition(
    "xxx",
    type="container",
    platform_capabilities=["FARGATE"],
    container_properties=pulumi.Output.all(
        image=image.image_name,
        execution_role=job_execution_role.arn,
        job_role=job_role.arn,
        log_group=job_log_group.name,
        # ...
    ).apply(
        lambda args: json.dumps(
            {
                "command": ["xxx"],
                "image": args["image"],
                "logConfiguration": {
                    "logDriver": "awslogs",
                    "options": {
                        "awslogs-group": args["log_group"],
                        "awslogs-multiline-pattern": (
                            "^(NOTSET|DEBUG|INFO|WARNING|ERROR|CRITICAL)"
                        ),
                    },
                },
                "fargatePlatformConfiguration": {"platformVersion": "LATEST"},
                "runtimePlatform": {
                    "cpuArchitecture": "ARM64",
                    "operatingSystemFamily": "LINUX",
                },
                "resourceRequirements": [
                    {"type": "VCPU", "value": "4"},
                    {"type": "MEMORY", "value": "30720"},
                ],
                "environment": [
                    {"name": "xxx", "value": "xxx"},
                    # ...
                ],
                "secrets": [
                    {
                        "name": "xxx",
                        "valueFrom": args["xxx"],
                    },
                    # ...
                ],
                "executionRoleArn": args["execution_role"],
                "jobRoleArn": args["job_role"],
            }
        )
    ),
    retry_strategy=aws.batch.JobDefinitionRetryStrategyArgs(
        attempts=5,
        evaluate_on_exits=[
            aws.batch.JobDefinitionRetryStrategyEvaluateOnExitArgs(
                on_status_reason="ResourceInitializationError:*",
                action="RETRY",
            ),
            aws.batch.JobDefinitionRetryStrategyEvaluateOnExitArgs(
                on_status_reason="Rate limit exceeded*", action="RETRY"
            ),
            aws.batch.JobDefinitionRetryStrategyEvaluateOnExitArgs(
                on_status_reason="Timeout waiting for network interface*",
                action="RETRY",
            ),
            aws.batch.JobDefinitionRetryStrategyEvaluateOnExitArgs(
                on_reason="*", action="EXIT"
            ),
        ],
    ),
    timeout=aws.batch.JobDefinitionTimeoutArgs(
        attempt_duration_seconds=172800
    ),
    tags={"user:Version": "..."},
    propagate_tags=True,
)
```
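For reference, the ARM64 image (`image.image_name` above) was built roughly like this; a minimal sketch assuming `pulumi_docker` v4, where `DockerBuildArgs` accepts a `platform` option (the build context and image name here are placeholders, not the real values):

```python
import pulumi_docker as docker

# Sketch only: the real build context, registry, and image name are redacted.
image = docker.Image(
    "xxx-image",
    build=docker.DockerBuildArgs(
        context=".",             # directory containing the Dockerfile
        platform="linux/arm64",  # must match runtimePlatform.cpuArchitecture
    ),
    image_name="xxx",            # placeholder; normally an ECR repo URL with tag
)
```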
However, when the job definition is created, `runtimePlatform` is missing from the job definition configuration shown in the AWS console, and the job fails with `exec format error` due to a mismatch between the Docker image architecture and the runtime platform. I thought that, since `container_properties` is passed as plain JSON, this would work right away once the support was added by AWS. Also, as mentioned in the issue, it already works with Terraform. Or do I have to wait for a new `pulumi-aws` / `pulumi-terraform-bridge` release?

Any info about this?
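One way to see whether `runtimePlatform` gets dropped before or after registration is to describe the registered job definition directly; a minimal sketch using boto3 (the job definition name `xxx` is the placeholder used above):

```python
import json

import boto3

batch = boto3.client("batch")
resp = batch.describe_job_definitions(jobDefinitionName="xxx", status="ACTIVE")
for jd in resp["jobDefinitions"]:
    print(jd["jobDefinitionArn"])
    # Prints null if the field was stripped before registration.
    print(json.dumps(jd["containerProperties"].get("runtimePlatform"), indent=2))
```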
melodic-tomato-39005:
Hi @proud-art-41399, I can’t see the problem at first glance. Would you be able to open an issue at https://github.com/pulumi/pulumi-aws/issues, ideally with a complete program to reproduce the issue? We usually triage new issues within 24h.
proud-art-41399:
Hi @melodic-tomato-39005, thanks for getting back to me. Here's the issue I created; hopefully it contains everything that's needed.