Using AWS SDK for Go V2 with DynamoDB

In this article we will connect to a locally running DynamoDB instance using version 2 of the AWS SDK for Go. One of the benefits of this simple application is that it provides a development environment for practising DynamoDB operations in Go and getting more familiar with the SDK.

Prerequisites:

  • Go
  • Docker, Docker Compose

Steps:

  • The code repository is available at this link. The application can be started by following the instructions in the README file.
  • The folder structure for this application is as follows:
.
├── Dockerfile
├── README.md
├── docker-compose.yaml
├── go.mod
├── go.sum
├── handler.go
├── main.go
├── screenshots
│   ├── health_screenshot.png
│   └── secret_screenshot.png
└── utils
    └── dynamodbutils.go
  • Run this command: docker run -p 8000:8000 amazon/dynamodb-local to start a Docker container running DynamoDB. The running instance can be accessed at http://localhost:8000.

  • To communicate with our locally running DynamoDB instance, we need to create a service client in Go (using the AWS SDK) with a few required configuration options. Note that we do not need to provide any meaningful credentials for a locally running DynamoDB. In the utils/dynamodbutils.go file, create a local client like this:

        func CreateLocalClient() *dynamodb.Client {
            cfg, err := config.LoadDefaultConfig(context.TODO(),
                config.WithRegion("us-east-1"),
                config.WithEndpointResolver(aws.EndpointResolverFunc(
                    func(service, region string) (aws.Endpoint, error) {
                        return aws.Endpoint{URL: "http://db:8000"}, nil
                    })),
                config.WithCredentialsProvider(credentials.StaticCredentialsProvider{
                    Value: aws.Credentials{
                        AccessKeyID: "dummy", SecretAccessKey: "dummy", SessionToken: "dummy",
                        Source: "Hard-coded credentials; values are irrelevant for local DynamoDB",
                    },
                }),
            )
            if err != nil {
                panic(err)
            }

            return dynamodb.NewFromConfig(cfg)
        }

    You might be wondering why the URL value is http://db:8000. It is because we are using Docker Compose in this application to coordinate the different containers. db is the name given to the container (defined in the docker-compose.yaml file) that runs our DynamoDB instance. Docker provides DNS-based service discovery on the Compose network, so containers can reach each other by name instead of by IP address. This is vital because container IP addresses can change; Docker resolves the name from the Compose file to whichever container currently backs it.
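
    As a quick sanity check, you can list the tables visible through this client. The snippet below is a minimal sketch and is not part of the repository; the PingLocal helper name is made up here, and it assumes the code runs inside the Compose network where http://db:8000 resolves (from the host you would point the client at http://localhost:8000 instead).

        // Minimal sketch (not from the repository): verify that the local DynamoDB
        // instance is reachable by listing its tables.
        package utils

        import (
            "context"
            "fmt"

            "github.com/aws/aws-sdk-go-v2/service/dynamodb"
        )

        // PingLocal lists the tables on the local DynamoDB instance as a sanity check.
        func PingLocal(client *dynamodb.Client) error {
            out, err := client.ListTables(context.TODO(), &dynamodb.ListTablesInput{})
            if err != nil {
                return fmt.Errorf("local DynamoDB not reachable: %w", err)
            }
            fmt.Println("existing tables:", out.TableNames)
            return nil
        }

    Calling something like utils.PingLocal(utils.CreateLocalClient()) from main.go should print an empty table list on a fresh instance.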

  • The next step is to create a Movies table in the database:

func (basics TableBasics) CreateMovieTable() (*types.TableDescription, error) {
	var tableDesc *types.TableDescription
	table, err := basics.DynamoDbClient.CreateTable(context.TODO(), &dynamodb.CreateTableInput{
		AttributeDefinitions: []types.AttributeDefinition{{
			AttributeName: aws.String("year"),
			AttributeType: types.ScalarAttributeTypeN,
		}, {
			AttributeName: aws.String("title"),
			AttributeType: types.ScalarAttributeTypeS,
		}},
		KeySchema: []types.KeySchemaElement{{
			AttributeName: aws.String("year"),
			KeyType:       types.KeyTypeHash,
		}, {
			AttributeName: aws.String("title"),
			KeyType:       types.KeyTypeRange,
		}},
		TableName: aws.String(basics.TableName),
		ProvisionedThroughput: &types.ProvisionedThroughput{
			ReadCapacityUnits:  aws.Int64(10),
			WriteCapacityUnits: aws.Int64(10),
		},
	})
	if err != nil {
		log.Printf("Couldn't create table %v. Here's why: %v\n", basics.TableName, err)
	} else {
		waiter := dynamodb.NewTableExistsWaiter(basics.DynamoDbClient)
		err = waiter.Wait(context.TODO(), &dynamodb.DescribeTableInput{
			TableName: aws.String(basics.TableName)}, 5*time.Minute)
		if err != nil {
			log.Printf("Wait for table exists failed. Here's why: %v\n", err)
		}
		tableDesc = table.TableDescription
	}
	return tableDesc, err
}

An important thing to note in the above code is the use of a waiter. When working with asynchronous AWS APIs, we need to wait for a resource to become available before performing any actions on it. In our case, CreateTable returns immediately while the table has a status of CREATING; once the table is ready, its status transitions to ACTIVE. In short, the waiter does the work of polling the table status for us. For more information about waiters, you can visit this official link.
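
For context, the TableBasics receiver used above pairs the service client with a table name. Its definition is not shown in the snippet; it is assumed to look roughly like the following sketch (the field names match how they are used in CreateMovieTable):

package utils

import "github.com/aws/aws-sdk-go-v2/service/dynamodb"

// Assumed shape of the TableBasics receiver: it simply bundles the DynamoDB
// service client with the name of the table it operates on.
type TableBasics struct {
    DynamoDbClient *dynamodb.Client
    TableName      string
}

In main.go, a value of this type (the tableBasics global mentioned below) would be initialised with the local client and the Movies table name.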

  • Now we have our Movies table ready for action. The rest of the code in the dynamodbutils file is CRUD functionality and a couple of helper functions; a sketch of what an item write can look like follows below.
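
Here is a minimal sketch of such an item write. It is not copied from the repository: the Movie struct fields simply mirror the table's key attributes, and the AddMovie name is illustrative. It uses attributevalue.MarshalMap to convert a Go struct into the attribute-value map DynamoDB expects.

package utils

import (
    "context"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/feature/dynamodb/attributevalue"
    "github.com/aws/aws-sdk-go-v2/service/dynamodb"
)

// Movie mirrors the table's key attributes (illustrative, not from the repository).
type Movie struct {
    Title string `dynamodbav:"title"`
    Year  int    `dynamodbav:"year"`
}

// AddMovie marshals a Movie into a DynamoDB item and writes it with PutItem.
func (basics TableBasics) AddMovie(movie Movie) error {
    item, err := attributevalue.MarshalMap(movie)
    if err != nil {
        return err
    }
    _, err = basics.DynamoDbClient.PutItem(context.TODO(), &dynamodb.PutItemInput{
        TableName: aws.String(basics.TableName),
        Item:      item,
    })
    return err
}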

  • In the main.go file we initialise our service client and some other global variables such as tableName and tableBasics. In the main function we then check whether the table exists; if it doesn't, we create it and add multiple items (movies) to it using a batch operation, sketched below. After this write process, we create a server with two endpoints to experiment with. The output of these endpoints is formatted with some JSON parsing techniques, which can be seen in the handler.go file. This is done just for fun and to learn how to parse a nested JSON structure.
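
The batch write can be done with the BatchWriteItem API. Below is a hedged sketch of what that call can look like; it reuses the illustrative Movie and TableBasics types from the earlier sketches and is not copied from the repository. Note that a single BatchWriteItem request accepts at most 25 put/delete requests.

package utils

import (
    "context"

    "github.com/aws/aws-sdk-go-v2/feature/dynamodb/attributevalue"
    "github.com/aws/aws-sdk-go-v2/service/dynamodb"
    "github.com/aws/aws-sdk-go-v2/service/dynamodb/types"
)

// AddMovieBatch writes a slice of movies in a single BatchWriteItem call
// (illustrative; a real implementation should chunk slices larger than 25
// items and retry any UnprocessedItems returned in the response).
func (basics TableBasics) AddMovieBatch(movies []Movie) error {
    var requests []types.WriteRequest
    for _, movie := range movies {
        item, err := attributevalue.MarshalMap(movie)
        if err != nil {
            return err
        }
        requests = append(requests, types.WriteRequest{
            PutRequest: &types.PutRequest{Item: item},
        })
    }
    _, err := basics.DynamoDbClient.BatchWriteItem(context.TODO(), &dynamodb.BatchWriteItemInput{
        RequestItems: map[string][]types.WriteRequest{
            basics.TableName: requests,
        },
    })
    return err
}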

Dockerization of the App

  • We are using a multi-stage Dockerfile and Docker Compose to containerize the application. Feel free to experiment with the configuration and explore how the different containers discover each other. Docker Compose makes this coordination very easy (you can do the same without Docker Compose to learn more about Docker networking). One important detail is to use the database container's name (db) when creating the service client in our code. Another key concept is Docker networking: we make sure both containers can reach each other on a custom network. To run this application without Docker Compose, start the database container with a name and attach it to a custom network, then start the application container on the same network; as long as the database container's name is used in the client configuration, this will work. This approach is not recommended, though, because Docker Compose lets us accomplish all of it declaratively: we declare the state of the containers we want, and Docker Compose provides that state every single time, without the human errors that can creep in when running all these commands ourselves.