Introduction to DynamoDB Shell

Amazon DynamoDB is a serverless, key-value NoSQL database that is fully managed by AWS. Recently, AWS announced the release of ddbsh, a command line interface inspired by similar tools such as the MySQL CLI. With ddbsh, users can enter SQL-like commands that are automatically translated into DynamoDB API calls. The tool offers a simple command line interface and supports a wide range of Data Definition Language (DDL) and Data Manipulation Language (DML) commands, making it a versatile and useful addition to DynamoDB’s ecosystem.

In order to use the new CLI, we need to build it from source in a few steps. CMake is required for building the project; if you don’t have it, install it from https://cmake.org/ or your system’s package manager.
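As a quick sanity check before building (the apt line below assumes a Debian-based system), you can verify the toolchain:

# confirm CMake is available
cmake --version

# on Ubuntu/Debian, CMake and a C++ toolchain can be installed via apt
sudo apt-get install -y cmake build-essential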

We need to build the AWS C++ SDK from source, because the ddbsh build process depends on it.

# clone the aws-sdk-cpp repo
git clone https://github.com/aws/aws-sdk-cpp.git

# create a directory for the build
mkdir aws-sdk-build

# change directory to the source repo
cd aws-sdk-cpp

# update submodules recursively
git submodule update --init --recursive

# change directory to the build folder
cd ../aws-sdk-build

# generate the makefiles via cmake
# BUILD_ONLY="dynamodb" limits the build to the DynamoDB client,
# and BUILD_SHARED_LIBS=OFF produces static libraries
cmake ../aws-sdk-cpp -DCMAKE_BUILD_TYPE=Release \
-DCMAKE_PREFIX_PATH=/usr/local/ \
-DCMAKE_INSTALL_PREFIX=/usr/local/ \
-DBUILD_ONLY="dynamodb" \
-DBUILD_SHARED_LIBS=OFF \
-DENABLE_TESTING=OFF \
-DFORCE_SHARED_CRT=OFF

# build
make

# install
sudo make install

Now we can build ddbsh itself from source.

# clone the repo
git clone https://github.com/awslabs/dynamodb-shell.git

# change directory to the source repo
cd dynamodb-shell

# make a build directory
mkdir build

# change directory to the new folder
cd build

# generate the makefiles via cmake
cmake ../ddbsh -DCMAKE_BUILD_TYPE=Release

# build
make

# install
sudo make install
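If the install succeeded, the ddbsh binary should now be on your PATH (the exact location depends on the install prefix):

# confirm where the binary landed
which ddbsh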

Now that the installation is complete, we can start executing commands.

$ ./ddbsh
us-east-1>

ddbsh uses the default region specified in the global AWS config file. We can switch to a different region with the connect command:

us-east-1> connect us-west-2;
CONNECT
us-west-2>
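For reference, the default region typically comes from the standard AWS CLI configuration file, ~/.aws/config; a minimal example (with an assumed default profile) looks like this:

# ~/.aws/config
[default]
region = us-east-1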

We’ve successfully connected and changed the region. We can create a new table and add some records.

Before we create anything, let’s explore the explain command, which shows the underlying DynamoDB API calls a statement would make, without executing it:

us-west-2> explain create table dogs ( id string, name string ) primary key ( id hash ) billing mode provisioned ( 5 rcu, 5 wcu ) gsi ( namegsi on (name hash) projecting all billing mode provisioned ( 5 rcu, 5 wcu ));

CreateTable({
  "AttributeDefinitions": [{
    "AttributeName": "id",
    "AttributeType": "S"
  }, {
    "AttributeName": "name",
    "AttributeType": "S"
  }],
  "TableName": "dogs",
  "KeySchema": [{
    "AttributeName": "id",
    "KeyType": "HASH"
  }],
  "GlobalSecondaryIndexes": [{
    "IndexName": "namegsi",
    "KeySchema": [{
      "AttributeName": "name",
      "KeyType": "HASH"
    }],
    "Projection": {
      "ProjectionType": "ALL"
    },
    "ProvisionedThroughput": {
      "ReadCapacityUnits": 5,
      "WriteCapacityUnits": 5
    }
  }],
  "BillingMode": "PROVISIONED",
  "ProvisionedThroughput": {
    "ReadCapacityUnits": 5,
    "WriteCapacityUnits": 5
  },
  "TableClass": "STANDARD"
})

Let’s create the table:

us-west-2> create table dogs ( id string, name string ) primary key ( id hash ) billing mode provisioned ( 5 rcu, 5 wcu ) gsi ( namegsi on (name hash) projecting all billing mode provisioned ( 5 rcu, 5 wcu ));
CREATE

We can list the existing tables via the show command.

us-west-2> show tables;
dogs | ACTIVE | PROVISIONED | STANDARD | f7b0baf5-3669-4283-ba32-cef7c5861b7e | arn:aws:dynamodb:us-west-2:275035559758:table/dogs | TTL DISABLED | GSI: 1 | LSI : 0

Now that the table has been created, let’s add some records.

us-west-2> insert into dogs ( id, name, age, breed, weight, weightType, gender )
values ( "bWOP", "Max", 5, "German Shepherd", 130, "lbs", "male" ),
( "hTwZ", "Chop", 2, "Rottweiler", 150, "lbs", "male" ),
( "VXZq", "Luna", 8, "Mixed Breed", 60, "lbs", "female" ),
( "rvXp", "Brutus", 10, "St. Bernard", 200, "lbs", "male" ),
( "ry2e", "Riley", 15, "Mixed Breed", 90, "lbs", "female");

INSERT
INSERT
INSERT
INSERT
INSERT
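Note that only id and name were declared when we created the table; DynamoDB is schemaless outside of key attributes, so each item can carry extra attributes such as age or breed. If you’re curious how an insert maps to the underlying API, explain should print the corresponding PutItem request without writing anything (output omitted; the id and values below are hypothetical):

us-west-2> explain insert into dogs ( id, name, age ) values ( "zzz1", "Rex", 3 );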

We can play around with the data now.

us-west-2> select * from dogs;
{age: 10, breed: "St. Bernard", gender: male, id: rvXp, name: Brutus, weight: 200, weightType: lbs}
{age: 5, breed: "German Shepherd", gender: male, id: bWOP, name: Max, weight: 130, weightType: lbs}
{age: 15, breed: "Mixed Breed", gender: female, id: ry2e, name: Riley, weight: 90, weightType: lbs}
{age: 2, breed: Rottweiler, gender: male, id: hTwZ, name: Chop, weight: 150, weightType: lbs}
{age: 8, breed: "Mixed Breed", gender: female, id: VXZq, name: Luna, weight: 60, weightType: lbs}

us-west-2> select * from dogs where gender = "male";
{age: 10, breed: "St. Bernard", gender: male, id: rvXp, name: Brutus, weight: 200, weightType: lbs}
{age: 5, breed: "German Shepherd", gender: male, id: bWOP, name: Max, weight: 130, weightType: lbs}
{age: 2, breed: Rottweiler, gender: male, id: hTwZ, name: Chop, weight: 150, weightType: lbs}

us-west-2> select * from dogs where age > 8;
{age: 10, breed: "St. Bernard", gender: male, id: rvXp, name: Brutus, weight: 200, weightType: lbs}
{age: 15, breed: "Mixed Breed", gender: female, id: ry2e, name: Riley, weight: 90, weightType: lbs}
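Because gender and age are not key attributes, these queries cannot be served by the table’s key; ddbsh has to fall back to a Scan with a filter expression. Running a statement through explain should confirm which API call is issued (output omitted):

us-west-2> explain select * from dogs where age > 8;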

We can also query against the Global Secondary Index (GSI) we added during table creation.

us-west-2> select * from dogs.namegsi where name = "Brutus";
{age: 10, breed: "St. Bernard", gender: male, id: rvXp, name: Brutus, weight: 200, weightType: lbs}
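Since name is the hash key of namegsi, this lookup can be served by a Query against the index rather than a Scan of the base table. Comparing the explain output for both forms should make the difference visible (outputs omitted):

us-west-2> explain select * from dogs where name = "Brutus";
us-west-2> explain select * from dogs.namegsi where name = "Brutus";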

Let’s update a record.

us-west-2> update dogs set age = 11 where name = "Brutus";
UPDATE
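A quick select verifies the change; the record should now reflect the updated age:

us-west-2> select * from dogs where name = "Brutus";
{age: 11, breed: "St. Bernard", gender: male, id: rvXp, name: Brutus, weight: 200, weightType: lbs}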

Delete a record.

us-west-2> delete from dogs where name = "Brutus";
DELETE
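Running the same select again should now return no rows:

us-west-2> select * from dogs where name = "Brutus";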

For further information, please refer to the project’s GitHub page: https://github.com/awslabs/dynamodb-shell/blob/main/README.md

Once you are done with this tutorial, don’t forget to delete the table we’ve created.

us-west-2> drop table dogs;
DROP

us-west-2> show tables;

Nothing is listed this time, as we no longer have any tables.

Overall, ddbsh offers excellent tooling for working with DynamoDB, enabling developers to interact with the database directly from the command line. However, it is not yet recommended for use in production environments. We eagerly anticipate future updates from AWS that bring further improvements.

Are you ready to enhance your AWS Cloud journey? Head over to our website and book a free consultation call.
