CodePipeline

CodePipeline is a continuous integration/continuous delivery (CI/CD) service offered by AWS. CodePipeline can be used to create automated pipelines that handle the build, test and deployment of software.

LocalStack comes with a bespoke execution engine that can be used to create, manage, and execute pipelines. It supports a variety of actions that integrate with S3, CodeBuild, CodeConnections, and more. The available operations can be found on the API coverage page.

In this guide, we will create a simple pipeline that fetches an object from one S3 bucket and uploads it to another. It is intended for users who are new to CodePipeline and have basic knowledge of the AWS CLI and the awslocal wrapper.

Start LocalStack using your preferred method.
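
For example, with the LocalStack CLI you can start the emulator in the background:

localstack start -d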

Begin by creating the S3 buckets that will serve as the source and target.

awslocal s3 mb s3://source-bucket
awslocal s3 mb s3://target-bucket

Note that CodePipeline requires source S3 buckets to have versioning enabled. This can be done using the S3 PutBucketVersioning operation.

awslocal s3api put-bucket-versioning \
--bucket source-bucket \
--versioning-configuration Status=Enabled

Now create a placeholder file that will flow through the pipeline and upload it to the source bucket.

echo "Hello LocalStack!" > file
awslocal s3 cp file s3://source-bucket

Pipelines also require an artifact store, an S3 bucket used as intermediate storage.

awslocal s3 mb s3://artifact-store-bucket

Depending on the specifics of the declaration, CodePipeline pipelines need access to other AWS services. In this case, we want our pipeline to retrieve files from and upload files to S3. This requires a properly configured IAM role that our pipeline can assume.

Begin by defining the trust policy that allows CodePipeline to assume the role:

role.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "codepipeline.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Create the role with the following command and make note of the role ARN:

awslocal iam create-role --role-name role --assume-role-policy-document file://role.json | jq .Role.Arn

Now add a permissions policy to this role that permits read and write access to S3.

policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": "*"
    }
  ]
}

The permissions in the example policy above are broad. On production systems, you should use a more narrowly scoped policy for better security.

awslocal iam put-role-policy --role-name role --policy-name policy --policy-document file://policy.json

Now we can turn our attention to the pipeline declaration.

A pipeline declaration is used to define the structure of actions and stages to be performed. The following pipeline defines two stages with one action each. There is a source action which retrieves a file from an S3 bucket and marks it as the output. The output is placed in the intermediate bucket until it is picked up by the action in the second stage. This is a deploy action which uploads the file to the target bucket.

Pay special attention to roleArn and artifactStore.location, as well as S3Bucket, S3ObjectKey, and BucketName. These correspond to the resources we created earlier.

declaration.json
{
  "name": "pipeline",
  "executionMode": "SUPERSEDED",
  "pipelineType": "V1",
  "roleArn": "arn:aws:iam::000000000000:role/role",
  "artifactStore": {
    "type": "S3",
    "location": "artifact-store-bucket"
  },
  "version": 1,
  "stages": [
    {
      "name": "stage1",
      "actions": [
        {
          "name": "action1",
          "actionTypeId": {
            "category": "Source",
            "owner": "AWS",
            "provider": "S3",
            "version": "1"
          },
          "runOrder": 1,
          "configuration": {
            "S3Bucket": "source-bucket",
            "S3ObjectKey": "file",
            "PollForSourceChanges": "false"
          },
          "outputArtifacts": [
            {
              "name": "intermediate-file"
            }
          ],
          "inputArtifacts": []
        }
      ]
    },
    {
      "name": "stage2",
      "actions": [
        {
          "name": "action1",
          "actionTypeId": {
            "category": "Deploy",
            "owner": "AWS",
            "provider": "S3",
            "version": "1"
          },
          "runOrder": 1,
          "configuration": {
            "BucketName": "target-bucket",
            "Extract": "false",
            "ObjectKey": "output-file"
          },
          "inputArtifacts": [
            {
              "name": "intermediate-file"
            }
          ],
          "outputArtifacts": []
        }
      ]
    }
  ]
}

Create the pipeline using the following command:

awslocal codepipeline create-pipeline --pipeline file://./declaration.json

A ‘pipeline execution’ is an instance of a pipeline in a running or finished state.

The CreatePipeline operation we ran earlier started a pipeline execution. This can be confirmed using:

awslocal codepipeline list-pipeline-executions --pipeline-name pipeline
Output
{
  "pipelineExecutionSummaries": [
    {
      "pipelineExecutionId": "37e8eb2e-0ed9-447a-a016-8dbbd796bfe7",
      "status": "Succeeded",
      "startTime": 1745486647.138571,
      "lastUpdateTime": 1745486648.290341,
      "trigger": {
        "triggerType": "CreatePipeline"
      },
      "executionMode": "SUPERSEDED"
    }
  ]
}

Note the trigger.triggerType field specifies what initiated the pipeline execution. Currently in LocalStack, only two triggers are implemented: CreatePipeline and StartPipelineExecution.

The above pipeline execution was successful. This means that we can retrieve the output-file object from the target-bucket S3 bucket.

awslocal s3 cp s3://target-bucket/output-file output-file

To verify that it is the same file as the original input:

cat output-file

The output will be:

Hello LocalStack!

Using the ListActionExecutions operation, you can retrieve detailed information about each action execution, such as its inputs and outputs. This is useful when debugging the pipeline.

awslocal codepipeline list-action-executions --pipeline-name pipeline
Output
{
  "actionExecutionDetails": [
    {
      "pipelineExecutionId": "37e8eb2e-0ed9-447a-a016-8dbbd796bfe7",
      "actionExecutionId": "e38716df-645e-43ce-9597-104735c7f92c",
      "pipelineVersion": 1,
      "stageName": "stage2",
      "actionName": "action1",
      "startTime": 1745486647.269867,
      "lastUpdateTime": 1745486647.289813,
      "status": "Succeeded",
      "input": {
        "actionTypeId": {
          "category": "Deploy",
          "owner": "AWS",
          "provider": "S3",
          "version": "1"
        },
        "configuration": {
          "BucketName": "target-bucket",
          "Extract": "false",
          "ObjectKey": "output-file"
        },
        "resolvedConfiguration": {
          "BucketName": "target-bucket",
          "Extract": "false",
          "ObjectKey": "output-file"
        },
        "region": "eu-central-1",
        "inputArtifacts": [
          {
            "name": "intermediate-file",
            "s3location": {
              "bucket": "artifact-store-bucket",
              "key": "pipeline/intermediate-file/01410aa4.zip"
            }
          }
        ]
      },
      "output": {
        "outputArtifacts": [],
        "executionResult": {
          "externalExecutionId": "bcff0781",
          "externalExecutionSummary": "Deployment Succeeded"
        },
        "outputVariables": {}
      }
    },
    {
      "pipelineExecutionId": "37e8eb2e-0ed9-447a-a016-8dbbd796bfe7",
      "actionExecutionId": "ae99095a-1d43-46ee-8a48-c72b6d60021e",
      "pipelineVersion": 1,
      "stageName": "stage1",
      "actionName": "action1",
      ...

The CreatePipeline, GetPipeline, UpdatePipeline, ListPipelines, and DeletePipeline operations are used to manage pipeline declarations.

LocalStack supports emulation for V1 pipelines. V2 pipelines are only created as mocks.
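
For example, the pipeline created in this guide can be inspected, updated from the declaration file, listed, and finally deleted (only run the last command once you are done with it):

awslocal codepipeline get-pipeline --name pipeline
awslocal codepipeline update-pipeline --pipeline file://declaration.json
awslocal codepipeline list-pipelines
awslocal codepipeline delete-pipeline --name pipeline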

Pipeline executions can be managed with the StartPipelineExecution, StopPipelineExecution, GetPipelineExecution, and ListPipelineExecutions operations.
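
For example, a new execution of the pipeline from this guide can be started and then inspected; replace <execution-id> with the ID returned by the first command:

awslocal codepipeline start-pipeline-execution --name pipeline
awslocal codepipeline get-pipeline-execution --pipeline-name pipeline --pipeline-execution-id <execution-id>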

When stopping pipeline executions with StopPipelineExecution, the "stop and abandon" mode is not supported; setting the abandon flag has no effect. This is because LocalStack uses threads as the underlying mechanism to simulate pipelines, and threads cannot be cleanly preempted.
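
A running execution can be stopped as shown below; the --abandon flag is accepted but, as noted above, has no effect in LocalStack:

awslocal codepipeline stop-pipeline-execution \
  --pipeline-name pipeline \
  --pipeline-execution-id <execution-id> \
  --reason "Stopped from the tutorial"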

Action executions can be inspected using the ListActionExecutions operation.

Pipeline resources can be tagged using the TagResource, UntagResource, and ListTagsForResource operations.

Tag the pipeline and list its tags with the following commands:

awslocal codepipeline tag-resource \
--resource-arn arn:aws:codepipeline:eu-central-1:000000000000:pipeline \
--tags key=purpose,value=tutorial
awslocal codepipeline list-tags-for-resource \
--resource-arn arn:aws:codepipeline:eu-central-1:000000000000:pipeline
Output
{
  "tags": [
    {
      "key": "purpose",
      "value": "tutorial"
    }
  ]
}

Untag the pipeline with the following command:

awslocal codepipeline untag-resource \
--resource-arn arn:aws:codepipeline:eu-central-1:000000000000:pipeline \
--tag-keys purpose

CodePipeline on LocalStack supports variables which allow dynamic configuration of pipeline actions.

Actions produce output variables which can be referenced in the configuration of subsequent actions. Note that an action's output variables are only available to downstream actions when the action defines a namespace.

CodePipeline’s variable placeholder syntax is as follows:

#{namespace.variable}

As with AWS, LocalStack only makes the codepipeline.PipelineExecutionId variable available by default in a pipeline.
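
As an illustration, the deploy action from the declaration above could embed the execution ID in the uploaded object's key; this is a sketch in which only the ObjectKey value changes:

"configuration": {
  "BucketName": "target-bucket",
  "Extract": "false",
  "ObjectKey": "output-#{codepipeline.PipelineExecutionId}"
}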

You can use runOrder to control whether actions within a stage run in parallel or sequentially, as sketched below.
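
Schematically, actions in the same stage with equal runOrder values run in parallel, while a higher value makes an action wait for the earlier ones to finish (other required action fields are omitted here for brevity):

"actions": [
  { "name": "copy-a",   "runOrder": 1 },
  { "name": "copy-b",   "runOrder": 1 },
  { "name": "finalize", "runOrder": 2 }
]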

The supported actions in LocalStack CodePipeline are listed below. Using an unsupported action will make the pipeline fail. If you would like support for more actions, please raise a feature request.

The CloudFormation Deploy action executes a CloudFormation stack. It supports the following action modes: CREATE_UPDATE, CHANGE_SET_REPLACE, and CHANGE_SET_EXECUTE.
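
A hypothetical action entry might look like the following sketch; the stack name is illustrative, and the template path assumes an input artifact named intermediate-file that contains a template.yml:

{
  "name": "deploy-stack",
  "actionTypeId": {
    "category": "Deploy",
    "owner": "AWS",
    "provider": "CloudFormation",
    "version": "1"
  },
  "runOrder": 1,
  "configuration": {
    "ActionMode": "CREATE_UPDATE",
    "StackName": "my-stack",
    "TemplatePath": "intermediate-file::template.yml",
    "RoleArn": "arn:aws:iam::000000000000:role/role"
  },
  "inputArtifacts": [
    {
      "name": "intermediate-file"
    }
  ],
  "outputArtifacts": []
}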

The CodeBuild Build and Test action can be used to start a CodeBuild container and run the given buildspec.
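
A minimal sketch of such an action, assuming a CodeBuild project named my-project already exists:

{
  "name": "build",
  "actionTypeId": {
    "category": "Build",
    "owner": "AWS",
    "provider": "CodeBuild",
    "version": "1"
  },
  "runOrder": 1,
  "configuration": {
    "ProjectName": "my-project"
  },
  "inputArtifacts": [
    {
      "name": "intermediate-file"
    }
  ],
  "outputArtifacts": [
    {
      "name": "build-output"
    }
  ]
}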

The CodeConnections Source action is used to specify a VCS repo as the input to the pipeline.

LocalStack supports integration only with GitHub at this time. Set the CODEPIPELINE_GH_TOKEN configuration option to a GitHub personal access token to be able to fetch private repositories.
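
A sketch of such a source action, assuming the standard CodeStarSourceConnection provider naming; the connection ARN, repository, and branch are all illustrative:

{
  "name": "github-source",
  "actionTypeId": {
    "category": "Source",
    "owner": "AWS",
    "provider": "CodeStarSourceConnection",
    "version": "1"
  },
  "runOrder": 1,
  "configuration": {
    "ConnectionArn": "arn:aws:codeconnections:eu-central-1:000000000000:connection/example",
    "FullRepositoryId": "my-org/my-repo",
    "BranchName": "main"
  },
  "outputArtifacts": [
    {
      "name": "repo-source"
    }
  ],
  "inputArtifacts": []
}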

The ECR Source action is used to specify an Elastic Container Registry image as a source artifact.
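
A sketch of an ECR source action, assuming a repository named my-repo with a latest tag (both illustrative):

{
  "name": "ecr-source",
  "actionTypeId": {
    "category": "Source",
    "owner": "AWS",
    "provider": "ECR",
    "version": "1"
  },
  "runOrder": 1,
  "configuration": {
    "RepositoryName": "my-repo",
    "ImageTag": "latest"
  },
  "outputArtifacts": [
    {
      "name": "image-details"
    }
  ],
  "inputArtifacts": []
}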

The ECS CodeDeploy Blue/Green action is used to deploy a containerized application using a blue/green deployment.

LocalStack does not accurately emulate a blue/green deployment due to limitations in ELB and ECS. It will only update the running ECS service with a new task definition and wait for the service to be stable.

The ECS Deploy action creates a revision of a task definition based on an already deployed ECS service.
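
A sketch of an ECS deploy action, assuming an existing cluster and service and an input artifact containing an imagedefinitions.json file (all names illustrative):

{
  "name": "ecs-deploy",
  "actionTypeId": {
    "category": "Deploy",
    "owner": "AWS",
    "provider": "ECS",
    "version": "1"
  },
  "runOrder": 1,
  "configuration": {
    "ClusterName": "my-cluster",
    "ServiceName": "my-service",
    "FileName": "imagedefinitions.json"
  },
  "inputArtifacts": [
    {
      "name": "image-details"
    }
  ],
  "outputArtifacts": []
}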

The Lambda Invoke action is used to execute a Lambda function in a pipeline.
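
A sketch of a Lambda invoke action, assuming a function named my-function exists; UserParameters is an arbitrary string passed to the function:

{
  "name": "invoke-fn",
  "actionTypeId": {
    "category": "Invoke",
    "owner": "AWS",
    "provider": "Lambda",
    "version": "1"
  },
  "runOrder": 1,
  "configuration": {
    "FunctionName": "my-function",
    "UserParameters": "tutorial"
  },
  "inputArtifacts": [],
  "outputArtifacts": []
}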

The Manual Approval action can be included in the pipeline declaration but it will only function as a no-op.

The S3 Deploy action is used to upload artifacts to a given S3 bucket as the output of the pipeline.

The S3 Source action is used to specify an S3 bucket object as input to the pipeline.
