
A Completely Serverless Application in Terraform Day 1: Creating a DynamoDB Table

AJ · Mar 26, 2021

4 min read

Today I'm going to kick off a series on building a serverless application entirely in Terraform. We'll break this down into several bite-size chunks, so you can complete it in just a few minutes a day.

Day 1 - Create the DynamoDB table
Day 2 - Create Lambda function
Day 3 - Create Cognito service
Day 4 - Deploy test app

In a previous post, I talked about the benefits and the how-to of storing your Terraform state in GitLab. If you have not checked that out, I recommend you do so before you get too heavily into Terraform development, as it will be a huge time-saver for you.

Let's start by modeling what our DynamoDB table will look like.

For this application, we're going to create a table of movie titles and links to various sites to purchase copies of these movies. We'll store as much metadata as we can on each title, like publisher, description, release date, retail price, etc.

Let's take this example. One of my favorite shows is Cowboy Bebop. Let's use this show as our model.

[
{
  "title":"Cowboy Bebop Complete Series Blu-ray",
  "description":"Cowboy Bebop complete series collection contains episodes 1-26 of the anime directed by Shinichiro Watanabe.",
  "publisher":"Funimation",
  "ISBN":"704400090554",
  "languages":["english","japanese"],
  "releasedate":"2021-03-16 08:51:00.000",
  "retailprice":"59.98",
  "store_info": [
    { "store":"Amazon", "link":"amazon.com/dp/B00NP06DJE"},
    { "store":"Best Buy", "link":"bestbuy.com/site/cowboy-bebop-complete-ser…?"},
    { "store":"Right Stuf Anime", "link":"rightstufanime.com/Cowboy-Bebop-Blu-ray-Co…"},
    { "store":"Target", "link":"target.com/p/cowboy-bebop-the-complete-ser…"}
    ]
}
]

Admittedly I am not an expert in data modeling, but I think this will work, at least for our example.

We'll create the DynamoDB table using Terraform. I covered how to set up Terraform remote state and a CI/CD pipeline in a previous post.
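If you're following along without that pipeline, here's a minimal provider block sketch to pair with the table resource below. The AWS provider version constraint is an assumption, and the region simply matches the one the loader script uses later; adjust both for your environment.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

# Assumed region for this example; swap in your own
provider "aws" {
  region = "us-east-2"
}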

It's important to note, especially if you're new to NoSQL and coming from a SQL background, that you do not have to declare all of the "columns" of a table. You only declare a partition key and a sort key. Read the docs for more details.

We'll use "ISBN" as our partition key and "title" as our sort key. The reason for this is, while it may seem unlikely that the title will change, the ISBN will definitely NOT change for this title, lest we create a new item. So let's design this simple input.

Here is the Terraform code that we will use to deploy our DynamoDB.

resource "aws_dynamodb_table" "titles" {
  name           = "titles"
  billing_mode   = "PROVISIONED"
  read_capacity  = 5
  write_capacity = 5
  hash_key       = "isbn"
  range_key      = "title"

  attribute {
    name = "isbn"
    type = "S"
  }

  attribute {
    name = "title"
    type = "S"
  }

Quick note: I'm defining read/write capacity as PROVISIONED at 5 for testing purposes. You may be tempted to set it to PAY_PER_REQUEST for testing, but this is a BAD IDEA. Pay-per-request can cost as much as 5x what provisioned capacity costs. If you're concerned about a lot of input/output, consider running DynamoDB locally.
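If you want to try the local route, one common option (assuming you have Docker installed) is Amazon's DynamoDB Local image, which listens on port 8000 by default, the same endpoint the loader script below falls back to:

# Run DynamoDB Local in a container on port 8000 (assumes Docker is installed)
docker run -p 8000:8000 amazon/dynamodb-local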

Now, run this TF code in our pipeline and experience the joy of great success.

If you've previously set up your AWS CLI, you should be able to quickly query DynamoDB for a list of tables:

$ aws dynamodb list-tables

{
    "TableNames": [
        "titles"
    ]
}
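If you also want to confirm the key schema Terraform created, describe-table can show just that piece:

$ aws dynamodb describe-table --table-name titles --query "Table.KeySchema"

You should see isbn as the HASH key and title as the RANGE key, matching the resource we declared above.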

Now we have to load this data. Google is my friend in this situation, so I'll borrow the Python script for loading data straight from the AWS documentation. Prepare your virtualenv with a couple of simple pipenv commands:

pipenv --three
pipenv install boto3

I'm going to modify the code provided by AWS just a bit to load this piece of data.

from decimal import Decimal
import json
import boto3

awsregion = 'us-east-2'

def load_itemdata(data, dynamodb=None, region=None):
    # If nothing is passed for dynamodb, fall back to a local DynamoDB endpoint;
    # otherwise connect to the real table in the given AWS region.
    if not dynamodb:
        dynamodb = boto3.resource('dynamodb', endpoint_url="http://localhost:8000")
    else:
        dynamodb = boto3.resource('dynamodb', region_name=region)

    table = dynamodb.Table('titles')
    for d in data:
        title = d['title']
        isbn = d['isbn']
        print("Adding title:", title, isbn)
        table.put_item(Item=d)


if __name__ == '__main__':
    # parse_float=Decimal keeps any numeric values compatible with DynamoDB
    with open("./data/items.json") as jsondata:
        itemlist = json.load(jsondata, parse_float=Decimal)
    # Passing True skips the local endpoint and loads into the AWS region above
    load_itemdata(itemlist, True, awsregion)

Now we've got our single item of data inside of our DynamoDB in the cloud.
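As a quick sanity check, you can fetch the item back with the composite key we chose. Both the partition key and the sort key are required for a get-item call:

$ aws dynamodb get-item \
    --table-name titles \
    --key '{"isbn": {"S": "704400090554"}, "title": {"S": "Cowboy Bebop Complete Series Blu-ray"}}'

That should return the full item, including the nested store_info list.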

Next time, we'll set up a Lambda function and an API frontend to call this data, then display it on a web page.
