Lambda Functions Part 2

In order to learn the ropes a little more, I decided to try a worked Lambda example. Recall that last time the build was adjusted to suit Amazon EC2, so now I need to learn a little more about the infrastructure in order to deploy and run in a real scenario. I’m anticipating that the best pattern will be to use S3 buckets in conjunction with Lambda functions, the latter responding to event data generated when users upload input for processing, with the output sent to a different bucket for download. In totality this will comprise the heavy lifting of the p2t backend.

As it turns out, there is a fairly complete example on using S3 buckets with Lambda functions; you can find more details here (at the time of writing). I made a few key changes along the way, detailed below. First step: I installed the AWS command line interface (CLI). This will be essential going forward; in a real operational environment much will be accomplished by scripting. Assuming Python 2.7 (ish) and pip are available, installation on Ubuntu is a simple step:

$ sudo pip install awscli
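
A quick sanity check (my own habit, not part of the example) is to ask the CLI for its version, which confirms it landed on the path:

$ aws --version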

In order to use the CLI to do pretty much anything of worth, a method of authentication for interacting with AWS resources must be established, via the Identity and Access Management (IAM) console. In creating an AWS account, you essentially create a root user. As with *NIX in general, it’s safer to operate with the lowest level of access needed. So via the IAM console I created a basic admin user with an access key ID and secret access key, which must be supplied to the CLI, thusly:

$ aws configure
AWS Access Key ID [****************KHAA]:
AWS Secret Access Key [****************dTr0]:
Default region name [us-east-1]:
Default output format [json]:
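
As a quick check that the credentials are wired up correctly, listing the S3 buckets on the account should succeed without complaint (empty output is fine at this stage):

$ aws s3 ls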

Another key preparation step was to create a couple of S3 buckets, again readily handled for now via the AWS console, using the S3 link (this can also be scripted from the CLI, as sketched after the snippet below). Note, though, that if you follow the example linked above to the letter, you will receive an error when trying to use the same bucket names: bucket names live in a global namespace shared by all users, and indeed after creation each bucket is given a unique URL. So I created two new buckets with (presumably) unused names, sharing the same basename, the output bin carrying the suffix "out". You will also find the source code for the Lambda function as you work through the example, and you will need to change it to suit your new bucket names. For example, since I’ll be using the Node.js runtime, I edited these lines in CreateThumbnail.js:

var dstBucket = srcBucket + "out";
var dstKey = "out-" + srcKey;
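
Incidentally, the bucket creation itself could equally be scripted; a minimal sketch using the CLI, with placeholder names of my own standing in for whatever (globally unique) names you settle on:

$ aws s3 mb s3://p2t-demo-bucket        # input bucket
$ aws s3 mb s3://p2t-demo-bucketout     # output bucket: same basename, "out" suffix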

Last but not least, one needs the Amazon Resource Name (ARN) of the role the Lambda function will execute under; ARNs are the manner in which AWS resources are uniquely identified, and have one of the following formats:

arn:partition:service:region:account:resource
arn:partition:service:region:account:resourcetype/resource
arn:partition:service:region:account:resourcetype:resource
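
Not every service uses all of the fields; an S3 bucket, for instance, is identified by name alone, with the region and account left blank (the bucket name below is just a placeholder):

arn:aws:s3:::my-unique-bucket-name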

Here also I found the need to depart from the example just a little. At the time of writing, the instructions don’t state this consistently (they do initially, however), but one does need to specify that the Lambda function requires access to S3. So keep this in mind when creating the role from the left frame of the IAM console. I used a canned role produced when playing with a more elementary Lambda function example. Regardless of how you produced it, your ARN should look something like:

arn:aws:iam::48560xxxxxxx:role/lambda_s3_exec_role
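
If you’d prefer to script this step as well, something along the following lines should do it: create a role that Lambda is allowed to assume, then attach the AWSLambdaExecute managed policy, which grants basic S3 get/put plus CloudWatch logging. The role name and trust-policy file name here are placeholders of my own:

$ cat trust.json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "lambda.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
$ aws iam create-role --role-name lambda_s3_exec_role \
    --assume-role-policy-document file://trust.json
$ aws iam attach-role-policy --role-name lambda_s3_exec_role \
    --policy-arn arn:aws:iam::aws:policy/AWSLambdaExecute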

Okay. So now we more or less have the ingredients needed to proceed with the Lambda function + S3 recipe, which will be put together in the next post.