AWS Lambda Guide Book - Gordon Wong
By Gordon Wong
No part of this book may be reproduced in any form or by any electronic or mechanical methods without the prior written permission of the author, except in the case of brief quotations embodied in critical reviews and certain other noncommercial uses permitted by copyright law.
While all attempts have been made to verify the information provided in this book, the author does not assume any responsibility for errors, omissions, or contrary interpretations of the subject matter herein.
This book is for educational and entertainment purposes only. The reader is responsible for his or her own actions, and the author does not accept any responsibility for any liabilities or damages resulting from the use of this information.
The trademarks that are used are used without any consent, and the publication of the trademarks and brands within this book is for clarifying purposes only; they remain the property of their respective owners, who are not affiliated with this publication.
AWS Lambda has become popular of late, and it is good for you to know how to work with it. When using it, there are limits imposed on the use of the various resources, and you need to know how to work within them. Permissions are also necessary for accessing the various resources, as well as for cross-account access. For instance, for you to use the AWS Lambda console, your account must have been granted certain permissions. This guide explores all of these topics. Enjoy reading!
Chapter 1- Creation of a Deployment Package
For us to create a Lambda function, we first have to create a deployment package, which should be a ".jar" or ".zip" file containing the code and the dependencies you have created. The zip should contain only the code and its dependencies, not the containing folder. We will demonstrate how this can be done in Node.js, Python, and Java.
Node.js
You can choose to write a deployment package on your own, or write the code directly in the Lambda console, in which case the console takes over the role of creating the deployment package for you and uploads it, meaning that your Lambda function will have been created.
The steps given below can be followed to create a deployment package outside the console. In our case, you need to create a deployment package which features the file named "myfile.js" and makes use of the "async" library.
1. Launch your text editor and write your code in it. Save the file with the name "myfile.js." The file name will be used to specify the handler when you create the Lambda function.
2. Use "npm" to install the libraries which your code needs. For example, if your code uses the "async" library, install it locally in the project directory (a sample command is sketched after the listing below). After installation, the directory should have the following structure:
filename.js
node_modules/async
node_modules/async/lib
node_modules/async/lib/async.js
node_modules/async/package.json
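As a sketch, the installation command could look like the following, assuming npm is installed and the command is run from the project directory:
npm install async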
3. The contents of the folder should then be zipped, and this forms your deployment package (a sample zip command is sketched below).
After that, you specify the .zip file as the deployment package when you create the Lambda function.
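A minimal sketch of zipping the contents (not the containing folder); the name "myfunction.zip" is only a placeholder:
cd project-dir
zip -r ../myfunction.zip .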
If you need to include your own libraries, even native ones, you can do so by packing them in the zip file and then referencing them when calling them from Node.js or from processes you have started previously. To make the bundled binaries discoverable at run time, include lines at the beginning of your function code that add their location inside the deployment package to the PATH environment variable.
Java
In this case, the deployment package can be in the form of a standalone .jar file or a .zip file. One can use Maven at the command line to package their Java code into a deployment package. Before going further, it will be good for us to start by installing the Maven command-line build tool; a sample Linux installation command is sketched below.
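Assuming a Debian/Ubuntu-based system (on other distributions, use the corresponding package manager), the installation and a quick verification could look like this:
sudo apt-get install maven
mvn -v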
The following should be the structure of the folder once you have set up the project, as shown below:
project-dir/pom.xml
project-dir/src/main/java/sample/Hello.java
Once the project has been built, the .jar file will be located in the subdirectory "project-dir/target."
Begin by creating a project directory named "project-dir." Create the following in that directory:
The Project Object Model file, "pom.xml." The following project information and configuration details should be added so that Maven can build our project:
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>doc-samples</groupId>
  <artifactId>lambda-java-sample</artifactId>
  <packaging>jar</packaging>
  <version>1.0-SNAPSHOT</version>
  <name>lambda-java-sample</name>
  <dependencies>
    <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-lambda-java-core</artifactId>
      <version>1.1.0</version>
    </dependency>
  </dependencies>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-shade-plugin</artifactId>
        <version>2.3</version>
        <configuration>
          <createDependencyReducedPom>false</createDependencyReducedPom>
        </configuration>
        <executions>
          <execution>
            <phase>package</phase>
            <goals>
              <goal>shade</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>
In the "dependencies" section, the "groupId" property is the Amazon AWS group ID in the Maven Central Repository.
project-dir/src/main/java
Under the "/java" subdirectory, add the Java files and the folder structure, if you have any. If the name of the Java package is "sample" and the source file is "Hello.java," the directory structure will look as follows:
project-dir/src/main/java/sample/Hello.java
This is where the deployment package will be created. We will use Maven to build our project at the command line. The following steps should be followed:
Change your directory on the command line to the project directory ("project-dir").
Use the following Maven command to build your project:
$ mvn package
A .jar file will be created by the above command and saved as "project-dir/target/lambda-java-sample-1.0-SNAPSHOT.jar."
The build creates the .jar file using the information provided by the "pom.xml" file and performs any necessary transforms. The resulting file contains all of the necessary dependencies; this is your deployment package, which you can upload to AWS Lambda.
Note that in the above case we did not use any IDE to create the deployment package. Next, we will use an IDE and demonstrate how this can be done.
Use of Maven and Eclipse IDE (Java) for creation of a .jar Deployment Package
One can use the Eclipse IDE for Java to package their code into a deployment package.
Before beginning to do this, we should install a Maven plugin for Eclipse: in the Eclipse plugin-installation dialog, add the plugin repository and then click "Add."
After that, we can begin to work with our project so as to create the deployment package. The following steps can be followed: the project is created, the dependencies are added, and the project is built; the .jar file which results is the deployment package.
2. Choose “New” from the “File” menu, and then select “Project.”
4. In the window for “New Maven Project”, select “Create a simple project” and then
leave the default selections as they are.
5. In the "New Maven Project" window, under "Configure project," type the artifact information given below:
Version: 0.0.1-SNAPSHOT
Packaging: jar
Name: lambda-java-sample
2. Add the dependency “aws-lambda-java-core” into the file “pom.xml.” This will
provide the project with definitions for the interfaces RequestHandler,
RequestStreamHandler, and Context. With that, you will be in a position to compile the
code so that it can be used in AWS Lambda.
1. Launch the context menu for the file pom.xml, select “Maven,” and then “Add
Dependency.”
2. The following should be typed in the "Add Dependency" window:
Group Id: com.amazonaws
Artifact Id: aws-lambda-java-core
Version: 1.1.0
2. In the window for “New Java Class,” type the values given below:
Package: example
Name: Hello
In the Package Explorer, open the context menu for your project. Click "Run As" and then choose "Maven Build…." In the "Edit Configuration" window, locate the "Goals" box and type "package" in it.
The maven-shade-plugin will take the artifacts produced by the package goal and create a standalone .jar file containing the compiled code and the dependencies resolved from the pom.xml file. We therefore add this plugin as follows:
1. Launch the context menu for the file pom.xml, select “Maven,” and then “Add
plugin.”
2. In the "Add Plugin" window, type the values given below:
Group Id: org.apache.maven.plugins
Artifact Id: maven-shade-plugin
Version: 2.3
3. You can then build the project again. This time, the .jar will be created as before, and then the "maven-shade-plugin" will be used to pull in the dependencies and make a standalone .jar file.
Launch the context menu for your project, select "Run As" followed by "Maven build…." In the "Edit Configurations" window, locate the "Goals" box and type "shade:shade" in it.
Select "Run."
The resulting .jar file can be found in the subdirectory “/target.” You can then open the
context menu for this subdirectory, select “Show In,” and then select “System Explorer.”
You have now learned how to create a .jar deployment package for your project. In this section, we will explore how to create a .zip deployment package from your Java project. This can be done with any build and packaging tool; here we use Gradle. Regardless of the tool you use, the resulting .zip file must have the structure given below:
All the compiled class files and resource files should be located at the root level.
All the jar files required for running the code should be located in the /lib directory.
We need to end with a project directory (project-dir) with the structure given below:
project-dir/build.gradle
project-dir/src/main/java/
Your code will be located in the /java folder. If the name of your package is “sample,” and
the java class “Hello.java” is located in it, then you will have the following:
project-dir/src/main/java/sample/Hello.java
Once the project has been built, you will find the final .zip file in the subdirectory
“project-dir/build/distributions.”
2. In the above directory, create the file "build.gradle" and add the following content to it:
apply plugin: 'java'

repositories {
    mavenCentral()
}

dependencies {
    compile (
        'com.amazonaws:aws-lambda-java-core:1.1.0',
        'com.amazonaws:aws-lambda-java-events:1.1.0'
    )
}

task buildZip(type: Zip) {
    from compileJava
    from processResources
    into('lib') {
        from configurations.runtime
    }
}

build.dependsOn buildZip
project-dir/src/main/java/
4. Under the /java subdirectory, add your Java files and folder structure, if you have any. If you have a Java package named "sample" and a source file named "Hello.java," the structure of the directory should appear as shown below:
project-dir/src/main/java/sample/Hello.java
5. Run the Gradle command given below to build and package the project and produce the deployment package:
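A minimal sketch, assuming Gradle is installed and the command is run from inside "project-dir":
gradle build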
6. You can then verify the resulting "project-dir.zip" file in the subdirectory "project-dir/build/distributions."
7. At this point, the .zip file, which is the deployment package, can be uploaded to AWS Lambda to create a Lambda function, which you can then invoke.
You do not have to use the Maven Central Repository. You can instead add all of your dependencies to the project folder. In such a case, the project folder should have the following structure:
project-dir/build.gradle
project-dir/jars/ (the required .jar files)
project-dir/src/main/java/
So, if our Java code has the package "sample" and the class "Hello.java," the code will be located in the subdirectory given below:
project-dir/src/main/java/sample/Hello.java
In this case, the "build.gradle" file should appear as follows:
apply plugin: 'java'

dependencies {
    compile fileTree(dir: 'jars', include: '*.jar')
}

task buildZip(type: Zip) {
    from compileJava
    from processResources
    into('lib') {
        from configurations.runtime
    }
}

build.dependsOn buildZip
Remember that the dependencies specify a "fileTree" which identifies "project-dir/jars" as the subdirectory containing all of the required jars.
It is now time for the package to be built. The same Gradle command shown earlier ("gradle build") can be executed for this purpose.
That is it.
Python
You have now learned how to create a deployment package in Node.js as well as in Java. Now, let us look at how to do the same in Python.
For the Lambda function to be created, we should first create a deployment package for the Lambda function, which should be a .zip file containing all the code and dependencies. You can choose to create the deployment package yourself, or write the code directly in the console, in which case the console will create the deployment package on your behalf and upload it.
We will have to use "pip" to install the dependencies/libraries. The following steps can be followed:
1. Create a project directory; we will call it "project-dir."
2. Save all of your Python source files at the root level of this directory. The source files are the ones with a .py extension.
3. Use pip to install any required libraries. These libraries should also be installed at the root level of the directory you have just created. The command sketched below can be used for this.
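As a sketch, installing a library (here "requests," used only as an example; the path is a placeholder) into the project directory could look like this:
pip install requests -t /path/to/project-dir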
4. The contents of the directory "project-dir" can then be zipped, and this gives you the deployment package. Note that you should zip the contents of the directory, not the directory itself. The contents of the zip file will be available as the current working directory of the Lambda function.
Sometimes, you might have used the virtualenv tool to create an isolated Python environment. The steps below show how to create the deployment package in that case. Note that in some cases you will not find the "site-packages" directory; if this happens, look for the libraries in the "dist-packages" directory instead.
Suppose that you have installed the package "requests" in the activated environment, and that it will be used in your code. The package can be installed as follows.
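A sketch of the installation command, run with the virtual environment activated:
pip install requests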
The following steps can then be followed to create the deployment package:
1. Create a .zip file from the Python code which is to be uploaded to AWS Lambda.
2. Add the libraries from the previously activated virtual environment to your .zip file. That is, add the contents of the following directory to the .zip file (the first path applies on Windows, the second on Linux/Unix/macOS):
%VIRTUAL_ENV%\Lib\site-packages
$VIRTUAL_ENV/lib/python2.7/site-packages
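As a sketch of these two steps on Linux/Unix, where "myfile.py" and the paths are placeholders for your own files and environment:
cd /path/to/project-dir
zip -r ~/deployment-package.zip myfile.py
cd $VIRTUAL_ENV/lib/python2.7/site-packages
zip -r -g ~/deployment-package.zip .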
Chapter 2- AWS Lambda Limits
AWS Lambda imposes limits on the resources that a function can use: for example, each function is allocated a fixed amount of memory and a fixed amount of code storage.
Sometimes you might need more concurrent executions than the default limit allows; in such a case, you can request an increase for the limit. This can be done by following the steps given below:
Begin by opening the page for AWS Support Center, log in, and then click on
“Create Case.”
Select “Lambda” from “Limit Type,” and then fill in any necessary fields provided
in the form, and then select the best method for contact at the bottom of your page.
Sometimes, AWS might automatically raise the limit on your concurrent executions so that your function can keep up with the incoming event rate.
If the function that you define exceeds the specified limit for a particular resource, it usually fails with an "exceeded limits" exception. These limits are fixed, and changing them at this time is impossible. Consider a situation in which you receive an exception such as "Code storage limit exceeded." To solve that problem, you will have to reduce your code storage. This can be done by following the steps given below:
Remove the Lambda functions that you no longer need.
Reduce the size of the code for the functions which you want to keep. The code size of a Lambda function can be found by using the AWS Lambda console, the AWS command line interface, or the AWS SDKs.
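As a sketch, the code size of a single function can be checked from the AWS CLI; "my-function" is a placeholder, and the "CodeSize" field in the response is reported in bytes:
aws lambda get-function-configuration --function-name my-function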
Chapter 3- Event Sources in AWS Lambda
Invocation of AWS Lambda functions is done by other AWS services that publish events. A Lambda function is invoked after a supported service has been configured as an event source. The following are abridged sample event payloads from some of the supported event sources.
CloudFormation Create Request
{
“StackId”: myid,
“ResponseURL”: “http://pre-signed-S3-url-for-response”,
“ResourceProperties”: {
“StackName”: “name”,
“List”: [
“1”,
“2”,
“3”
]
},
“RequestType”: “Create”,
“ResourceType”: “Custom::TestResource”,
“LogicalResourceId”: “TestResource”
}
SES Email Receiving
“Records”: [
“eventVersion”: “1.0”,
“ses”: {
“mail”: {
“commonHeaders”: {
“from”: [
],
“to”: [
],
“returnPath”: “[email protected]”,
“messageId”: “<0724365475sample.com>”,
},
“source”: “[email protected]”,
“timestamp”: “1990-02-03T00:00:00.000Z”,
“destination”: [
],
“headers”: [
“name”: “Return-Path”,
“value”: “<[email protected]>”
},
“name”: “Received”,
“value”: “(UTC)”
},
“name”: “DKIM-Signature”,
},
“name”: “MIME-Version”,
“value”: “1.0”
},
{
“name”: “From”,
},
“name”: “Date”,
},
“name”: “Message-ID”,
“value”: “<0724356745sample.com>”
},
“name”: “Subject”,
},
“name”: “To”,
value”: “[email protected]”
},
{
“name”: “Content-Type”,
],
“headersTruncated”: false,
“messageId”: “12vrnil0j7ic28thm7dfhrd6v0clambdw3nbp0i7x”
},
“receipt”: {
“recipients”: [
],
“timestamp”: “1990-02-02T00:00:00.000Z”,
“spamVerdict”: {
“status”: “PASS”
},
“dkimVerdict”: {
“status”: “PASS”
},
“processingTimeMillis”: 574,
“action”: {
“type”: “Lambda”,
“invocationType”: “Event”,
“functionArn”: arn
},
“spfVerdict”: {
“status”: “PASS”
},
“virusVerdict”: {
“status”: “PASS”
},
“eventSource”: “aws:ses”
}]}
Scheduled event
{
“account”: “0732453474754”,
“region”: “us-east-1”,
“detail”: {},
“source”: “aws.events”,
“time”: “1990-02-02T00:00:00Z”,
“id”: “gdc73f7d-aeb9-22d3-9d6a-925b769b0d9d”,
“resources”: [
“arn:aws:events:us-east-1:0732453474754:rule/my-schedule”
]
}
CloudWatch Logs
{
“awslogs”: {
“data”:
}
SNS
“Records”: [
“EventVersion”: “1.0”,
“EventSubscriptionArn”: eventsub,
“EventSource”: “aws:sns”,
“Sns”: {
“SignatureVersion”: “1”,
“Timestamp”: “1990-02-02T00:00:00.000Z”,
“Signature”: “SAMPLE”,
“SigningCertUrl”: “SAMPLE”,
“MessageId”: “65df01b5-ee74-5cb3-9987-4c331d41eb4e”,
“MessageAttributes”: {
“Test”: {
“Type”: “String”,
“Value”: “TestString”
},
“TestBinary”: {
“Type”: “Binary”,
“Value”: “TestBinary”
},
“Type”: “Notification”,
“UnsubscribeUrl”: “SAMPLE”,
“TopicArn”: arn,
“Subject”: “TestInvoke”
}
DynamoDB Update
“Records”: [
“eventID”: “1”,
“eventVersion”: “1.0”,
“dynamodb”: {
“Keys”: {
“Id”: {
“N”: “101”
},
“NewImage”: {
“Message”: {
},
“Id”: {
“N”: “101”
}
},
“StreamViewType”: “NEW_AND_OLD_IMAGES”,
“SequenceNumber”: “234”,
“SizeBytes”: 26
},
“awsRegion”: “asia-west-2”,
“eventName”: “INSERT”,
“eventSourceARN”: arn,
“eventSource”: “aws:dynamodb”
},
“eventID”: “2”,
“eventVersion”: “1.0”,
“dynamodb”: {
“OldImage”: {
“Message”: {
},
“Id”: {
“N”: “234”
}
},
“SequenceNumber”: “444”,
“Keys”: {
“Id”: {
“N”: “234”
},
“SizeBytes”: 59,
“NewImage”: {
“Message”: {
},
“Id”: {
“N”: “234”
},
“StreamViewType”: “NEW_AND_OLD_IMAGES”
},
“awsRegion”: “asia-west-2”,
“eventName”: “MODIFY”,
“eventSourceARN”: arn,
“eventSource”: “aws:dynamodb”
},
“eventID”: “3”,
“eventVersion”: “1.0”,
“dynamodb”: {
“Keys”: {
“Id”: {
“N”: “234”
},
“SizeBytes”: 38,
“SequenceNumber”: “555”,
“OldImage”: {
“Message”: {
},
“Id”: {
“N”: “234”
},
“StreamViewType”: “NEW_AND_OLD_IMAGES”
},
“awsRegion”: “asia-west-2”,
“eventName”: “REMOVE”,
“eventSourceARN”: arn,
“eventSource”: “aws:dynamodb”
}
Cognito Sync Trigger
“datasetName”: “NameOfDataSet”,
“eventType”: “SyncTrigger”,
“region”: “asia-east-1”,
“identityId”: “identityId”,
“datasetRecords”: {
“SampleKey2”: {
“newValue”: “value2”,
“oldValue”: “previousValue2”,
“op”: “replace”
},
“SampleKey1”: {
“newValue”: “Value1”,
“oldValue”: “previousValue1”,
“op”: “replace”
},
“identityPoolId”: “identityPoolId”,
“version”: 2
Kinesis
“Records”: [
“eventID”: “shardId-
000000000000:7845634547490985018280897469731445821800846756442025487”,
“eventVersion”: “1.0”,
“kinesis”: {
“partitionKey”: “partitionKey-3”,
“data”: “SGVjkG8sIHRoaXMhbXMgYSB0ZXK0IDExMj4=”,
“kinesisSchemaVersion”: “1.0”,
“sequenceNumber”:
“7845634547490985018280897469731445821800846756442025487”
},
“invokeIdentityArn”: arn,
“eventName”: “aws:kinesis:record”,
“eventSourceARN”: arn,
“eventSource”: “aws:kinesis”,
“awsRegion”: “asia-east-1”
}
}
S3 Put
“Records”: [
“eventVersion”: “2.0”,
“eventTime”: “1990-02-02T00:00:00.000Z”,
“requestParameters”: {
“sourceIPAddress”: “127.0.0.1”
},
“s3”: {
“configurationId”: “ConfigRule”,
“object”: {
“eTag”: “0123456789abwxy0123456789abcjkl”,
“sequencer”: “0A1B2C3D4E5F678901”,
“key”: “Picture.jpg”,
“size”: 1024
},
“bucket”: {
“arn”: arn,
“name”: “sourcebucket”,
“ownerIdentity”: {
“principalId”: “SAMPLE”
},
“s3SchemaVersion”: “1.0”
},
“responseElements”: {
“x-amz-id-2”:
“SAMPLE256/8456abcdefghijklambdaisgood/mnoutystuvwxyzABCDEFGH”,
“x-amz-request-id”: “SAMPLE123456789”
},
“awsRegion”: “asia-east-1”,
“eventName”: “ObjectCreated:Put”,
“userIdentity”: {
“principalId”: “SAMPLE”
},
“eventSource”: “aws:s3”
}
S3 Delete
“Records”: [
“eventVersion”: “2.0”,
“eventTime”: “1990-02-02T00:00:00.000Z”,
“requestParameters”: {
“sourceIPAddress”: “127.0.0.1”
},
“s3”: {
“configurationId”: “ConfigRule”,
“object”: {
“sequencer”: “0J1C2C3D4K5F677501”,
“key”: “Picture.jpg”
},
“bucket”: {
“arn”: arn,
“name”: “sourcebucket”,
“ownerIdentity”: {
“principalId”: “SAMPLE”
},
“s3SchemaVersion”: “1.0”
},
“responseElements”: {
“x-amz-id-2”: ”
SAMPLE256/8456abcdefghijklambdaisgood/mnoutystuvwxyzABCDEFGH “,
“x-amz-request-id”: “SAMPLE123456789”
},
“awsRegion”: “asia-east-1”,
“eventName”: “ObjectRemoved:Delete”,
“userIdentity”: {
“principalId”: “SAMPLE”
},
“eventSource”: “aws:s3”
}
Mobile Backend
“operation”: “echo”,
}
Chapter 4- Managing Resource Access Permissions
Each resource in AWS is owned by an account, and permission policies govern the process of creating and accessing resources. The administrator of an account can attach permission policies to IAM identities, and some services also support attaching permission policies to the resources themselves.
The primary resources in AWS Lambda are the Lambda function and the event source mapping. An event source mapping is created in the AWS Lambda pull model so as to associate a Lambda function with an event source.
Other resource types supported in AWS Lambda include the "alias" and the "version." However, note that aliases and versions can only be created in the context of an existing Lambda function.
The resource owner is the AWS account in which the resource was created. More precisely, the owner is the AWS account of the principal entity (the root account, an IAM user, or an IAM role) that authenticates the request which creates the resource. This works as illustrated below:
If you use the root credentials of your account to create a resource, then that AWS account becomes the owner of the resource; a resource belongs to the account in which it was created.
If you create an IAM user in your AWS account and grant that user permission to create a Lambda function, the user can create the Lambda function. However, the user belongs to your AWS account, so your account owns the Lambda function.
If you create an IAM role in your AWS account with permissions that allow creating a Lambda function, anyone able to assume the role can create the Lambda function. The AWS account to which the role belongs (your account) owns the Lambda function.
Policies attached to an IAM identity are referred to as "identity-based policies," while the policies which have been attached to a particular resource are referred to as "resource-based policies."
Policies can be attached to IAM identities. The following examples best demonstrate this:
Attach a permissions policy to a user or a group in your account- you can attach a permissions policy for a particular user to grant permissions so that he or she can be in a position to create or access AWS Lambda resources.
Attach a permissions policy to a role (that is, grant cross-account permissions)- with an IAM role, one can choose to attach an identity-based permissions policy to it, which is a good way of granting cross-account permissions.
Suppose that you have two accounts, A and B, and you need to grant cross-account permissions. You can do it as follows:
1. The administrator for account A creates an IAM role and attaches a permissions policy to the role which grants permissions on the resources in account A.
2. The administrator for account A then attaches a trust policy to the role which identifies account B as the principal that can assume the role.
3. The administrator for account B then delegates the permission to assume the role to users in account B. When this has been done, those users are able to access the resources in account A.
You can also choose to use an AWS service principal as the principal in the trust policy if you need to grant an AWS service the permission to assume the role.
Consider the permission policy example given below, which grants permission for the "lambda:ListFunctions" action on all resources, denoted by the * (wildcard) character:
{
  "Version": "2016-05-08",
  "Statement": [
    {
      "Sid": "ListExistingFunctions",
      "Effect": "Allow",
      "Action": [
        "lambda:ListFunctions"
      ],
      "Resource": "*"
    }
  ]
}
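As a sketch, an identity-based policy like the one above could be attached to an IAM user from the AWS CLI; the user name, policy name, and file name below are placeholders:
aws iam put-user-policy \
--user-name Bob \
--policy-name ListLambdaFunctions \
--policy-document file://list-functions-policy.json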
Resource-based Policies
The Lambda function is the primary resource in AWS Lambda, and the permission policies attached to Lambda functions are usually referred to as "Lambda function policies." A Lambda function policy can be used to grant cross-account permissions without having to create IAM roles. For example, Amazon S3 can be granted permission to invoke your Lambda function.
The example given below shows a Lambda function policy having a single statement.
{
  "Policy": {
    "Version": "2016-05-08",
    "Statement": [
      {
        "Effect": "Allow",
        "Principal": {
          "Service": "s3.amazonaws.com"
        },
        "Action": "lambda:InvokeFunction",
        "Resource": "arn:aws:lambda:region:account-id:function:Hellothere",
        "Sid": "86bagh70-6a1f-31a8-a8ab-8bb9bc777545",
        "Condition": {
          "StringEquals": {
            "AWS:SourceAccount": "account-id"
          },
          "ArnLike": {
            "AWS:SourceArn": "arn:aws:s3:::SampleBucket"
          }
        }
      }
    ]
  }
}
AWS Lambda Access with Identity-Based Policies
Consider the example identity-based policy given below, which has two statements:
{
  "Version": "2016-08-05",
  "Statement": [
    {
      "Sid": "CreateFunctionPermissions",
      "Effect": "Allow",
      "Action": [
        "lambda:CreateFunction"
      ],
      "Resource": "*"
    },
    {
      "Sid": "PermissionToPassAnyRole",
      "Effect": "Allow",
      "Action": [
        "iam:PassRole"
      ],
      "Resource": "arn:aws:iam::account-id:role/*"
    }
  ]
}
The first statement grants permission for the "lambda:CreateFunction" action on all resources ("*"), since AWS Lambda does not support resource-level permissions for this action.
The second statement grants permission for the IAM action "iam:PassRole" on IAM roles. The * (wildcard character) at the end of the resource value allows the "iam:PassRole" action on any IAM role. If you need to limit the permission to a specific role, replace the wildcard character with the specific role name.
The following steps can guide you when setting up an IAM user to work with AWS Lambda:
Create an IAM user
You should begin by creating an IAM user, adding the user to a group with administrative permissions, and granting the user administrative privileges. After that, you will be in a position to use the credentials of that user, together with the account's special sign-in URL, to sign in to the AWS Management Console.
Allow the user to list the Lambda functions
The IAM user in your account must possess permission for the action "lambda:ListFunctions" in order to see anything in the console. After the permission has been granted, the console will show the list of Lambda functions in your AWS account that have been created in the selected AWS region. The permission can be granted as shown below:
{
  "Version": "2016-05-08",
  "Statement": [
    {
      "Sid": "ListExistingFunctions",
      "Effect": "Allow",
      "Action": [
        "lambda:ListFunctions"
      ],
      "Resource": "*"
    }
  ]
}
We now need to enable the user to view the details of a Lambda function, such as its versions, aliases, and other configuration information. For this to be the case, the user has to be granted the following permissions:
{
  "Version": "2016-05-08",
  "Statement": [
    {
      "Sid": "DisplayFunctionDetailsPermissions",
      "Effect": "Allow",
      "Action": [
        "lambda:ListVersionsByFunction",
        "lambda:ListAliases",
        "lambda:GetFunction",
        "lambda:GetFunctionConfiguration",
        "lambda:ListEventSourceMappings",
        "lambda:GetPolicy"
      ],
      "Resource": "*"
    }
  ]
}
We can then allow the user to manually invoke a Lambda function. For this to be the case, the user has to be granted permission for the action "lambda:InvokeFunction." This can be done as shown below:
{
  "Version": "2016-05-06",
  "Statement": [
    {
      "Sid": "InvokePermission",
      "Effect": "Allow",
      "Action": [
        "lambda:InvokeFunction"
      ],
      "Resource": "*"
    }
  ]
}
A user can also be allowed to monitor a Lambda function and then view the CloudWatch
Logs. After invocation of a Lambda function by a user, the AWS Lambda executes the
function and then returns a result. For the user to be able to monitor the Lambda function,
they need to be granted extra permissions.
To enable the user to monitor the CloudWatch metrics for our Lambda function on the console's monitoring tab, or in the grid view on the console home page, the following permissions have to be granted:
{
  "Version": "2016-05-08",
  "Statement": [
    {
      "Sid": "CloudWatchPermission",
      "Effect": "Allow",
      "Action": [
        "cloudwatch:GetMetricStatistics"
      ],
      "Resource": "*"
    }
  ]
}
If you need your user to be in a position to click the links to the CloudWatch Logs in AWS
Lambda and then view the log output in the CloudWatch logs, the following permissions
have to be granted:
{
  "Version": "2016-05-08",
  "Statement": [
    {
      "Sid": "CloudWatchLogsPerms",
      "Effect": "Allow",
      "Action": [
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams",
        "logs:GetLogEvents"
      ],
      "Resource": "arn:aws:logs:region:account-id:log-group:/aws/lambda/*"
    }
  ]
}
It is also good to allow the user to create a Lambda function; for this, the following permissions have to be granted. The permissions related to IAM actions are needed because, when creating the Lambda function, the user has to select the IAM execution role which AWS Lambda will assume in order to execute the Lambda function. This is shown below:
{
  "Version": "2016-05-08",
  "Statement": [
    {
      "Sid": "ExistingRolesAndPoliciesList",
      "Effect": "Allow",
      "Action": [
        "iam:ListRolePolicies",
        "iam:ListRoles"
      ],
      "Resource": "*"
    },
    {
      "Sid": "CreateFunctionPermissions",
      "Effect": "Allow",
      "Action": [
        "lambda:CreateFunction"
      ],
      "Resource": "*"
    },
    {
      "Sid": "PermissionToPassRole",
      "Effect": "Allow",
      "Action": [
        "iam:PassRole"
      ],
      "Resource": "arn:aws:iam::account-id:role/*"
    }
  ]
}
For the user to be in a position to create an IAM role at the time of creating a Lambda function, the user has to be granted permission for the action "iam:PutRolePolicy." The following statement grants this and can be added to the policy:
{
  "Sid": "CreateRole",
  "Effect": "Allow",
  "Action": [
    "iam:PutRolePolicy"
  ],
  "Resource": "arn:aws:iam::account-id:role/*"
}
Chapter 5- Permissions Required for Using AWS Lambda Console
To enjoy the benefits offered by the AWS Lambda console, a user must be granted some extra permissions beyond the API-specific ones; exactly which permissions are needed depends on what the user does in the console, for example which event sources are configured.
After configuring an API endpoint in the AWS Lambda console, the console always has to make several calls to Amazon API Gateway. For these calls to work, the user has to be granted permission for the action "apigateway:*." The example given below best demonstrates this:
“Version”: “2016-05-08”,
“Statement”: [
“Sid”: “ApiGatewayPermissions”,
“Effect”: “Allow”,
“Action”: [
“apigateway:*”
],
“Resource”: “*”
},
{
“Sid”: “AddPermissionToFunctionPolicy”,
“Effect”: “Allow”,
“Action”: [
“lambda:AddPermission”,
“lambda:RemovePermission”,
“lambda:GetPolicy”
],
“Resource”: “arn:aws:lambda:region:account-id:function:*”
},
“Sid”: “ListEventSourcePerm”,
“Effect”: “Allow”,
“Action”: [
“lambda:ListEventSourceMappings”
],
“Resource”: “*”
}
Amazon CloudWatch Events
It is possible for you to schedule the time of execution of your Lambda function. After you select a CloudWatch Events rule, AWS Lambda creates a new target in CloudWatch Events which invokes the Lambda function. For the target creation to work, the following permissions have to be added:
“Version”: “2016-05-08”,
“Statement”: [
“Sid”: “EventPermissions”,
“Effect”: “Allow”,
“Action”: [
“events:PutRule”,
“events:ListRules”,
“events:ListRuleNamesByTarget”,
“events:PutTargets”,
“events:RemoveTargets”,
“events:DescribeRule”,
“events:TestEventPattern”,
“events:ListTargetsByRule”,
“events:DeleteRule”
],
“Resource”: “arn:aws:events:region:account-id:*”
},
“Sid”: “PermissionToFunctionPolicy”,
“Effect”: “Allow”,
“Action”: [
“lambda:AddPermission”,
“lambda:RemovePermission”,
“lambda:GetPolicy”
],
“Resource”: “arn:aws:lambda:region:account-id:function:*”
}
Amazon CloudWatch Logs
The Amazon CloudWatch Logs service can also invoke a Lambda function by publishing events. Once this service has been configured as an event source, the console lists the log groups contained in your AWS account. For the listing to work, the following permissions have to be added:
“Version”: “2016-05-08”,
“Statement”: [
“Sid”: “CloudWatchLogsPermissions”,
“Effect”: “Allow”,
“Action”: [
“logs:FilterLogEvents”,
“logs:DescribeLogGroups”,
“logs:PutSubscriptionFilter”,
“logs:DescribeSubscriptionFilters”,
“logs:DeleteSubscriptionFilter”,
“logs:TestMetricFilter”
],
“Resource”: “arn:aws:logs:region:account-id:*”
},
“Sid”: “PermissionToFunctionPolicy”,
“Effect”: “Allow”,
“Action”: [
“lambda:AddPermission”,
“lambda:RemovePermission”,
“lambda:GetPolicy”
],
“Resource”: “arn:aws:lambda:region:account-id:function:*”
},
“Sid”: “EventSourceMappingsPermissions”,
“Effect”: “Allow”,
“Action”: [
“lambda:ListEventSourceMappings”
],
“Resource”: “*”
}
Amazon Cognito
The console lists the identity pools contained in your account. Once you have selected a pool, it can be configured so that the event source type is "Cognito sync trigger." For this to be done, the following permissions can be added:
“Version”: “2016-05-08”,
“Statement”: [
“Sid”: “CognitoPermissions1”,
“Effect”: “Allow”,
“Action”: [
“cognito-identity:ListIdentityPools”
],
“Resource”: [
“arn:aws:cognito-identity:region:account-id:*”
},
{
“Sid”: “CognitoPermissions2”,
“Effect”: “Allow”,
“Action”: [
“cognito-sync:GetCognitoEvents”,
“cognito-sync:SetCognitoEvents”
],
“Resource”: [
“arn:aws:cognito-sync:region:account-id:*”
},
“Sid”: “PermissionToFunctionPolicy”,
“Effect”: “Allow”,
“Action”: [
“lambda:AddPermission”,
“lambda:RemovePermission”,
“lambda:GetPolicy”
],
“Resource”: “arn:aws:lambda:region:account-id:function:*”
},
{
“Sid”: “EventSourcePerms”,
“Effect”: “Allow”,
“Action”: [
“lambda:ListEventSourceMappings”
],
“Resource”: “*”
}
Amazon DynamoDB
The console lists the tables contained in your account. Once you have selected a table, the console checks whether a DynamoDB stream exists for that specific table; if it is not found, the stream is created. For the user to be allowed to use a DynamoDB stream as an event source for the Lambda function, the following permissions have to be added:
“Version”: “2016-05-08”,
“Statement”: [
“Sid”: “DynamoDBpermissions1”,
“Effect”: “Allow”,
“Action”: [
“dynamodb:DescribeStream”,
“dynamodb:DescribeTable”,
“dynamodb:UpdateTable”
],
“Resource”: “arn:aws:dynamodb:region:account-id:table/*”
},
“Sid”: “DynamoDBpermissions2”,
“Effect”: “Allow”,
“Action”: [
“dynamodb:ListStreams”,
“dynamodb:ListTables”
],
“Resource”: “*”
},
“Sid”: “LambdaGetPolicyPerm”,
“Effect”: “Allow”,
“Action”: [
“lambda:GetPolicy”
],
“Resource”: “arn:aws:lambda:region:account-id:function:*”
},
“Sid”: “LambdaEventSourcePerms”,
“Effect”: “Allow”,
“Action”: [
“lambda:CreateEventSourceMapping”,
“lambda:DeleteEventSourceMapping”,
“lambda:GetEventSourceMapping”,
“lambda:ListEventSourceMappings”,
“lambda:UpdateEventSourceMapping”
],
“Resource”: “*”
}
Amazon Kinesis Streams
The console lists all of the Amazon Kinesis streams found in your account. Once a stream has been selected, the console creates the event source mapping in AWS Lambda. The following permissions have to be added so that this can work:
“Version”: “2016-05-08”,
“Statement”: [
“Sid”: “PermForDescStream”,
“Effect”: “Allow”,
“Action”: [
“kinesis:DescribeStream”
],
“Resource”: “arn:aws:kinesis:region:account-id:stream/*”
},
“Sid”: “PermForListStreams”,
“Effect”: “Allow”,
“Action”: [
“kinesis:ListStreams”
],
“Resource”: “*”
},
“Sid”: “PermForGetFunctionPolicy”,
“Effect”: “Allow”,
“Action”: [
“lambda:GetPolicy”
],
“Resource”: “arn:aws:lambda:region:account-id:function:*”
},
“Sid”: “LambdaEventSourcePermissions”,
“Effect”: “Allow”,
“Action”: [
“lambda:CreateEventSourceMapping”,
“lambda:DeleteEventSourceMapping”,
“lambda:GetEventSourceMapping”,
“lambda:ListEventSourceMappings”,
“lambda:UpdateEventSourceMapping”
],
“Resource”: “*”
}
Amazon S3
The console prepopulates the list of the buckets contained in the AWS account and then finds the bucket location for each of the buckets. Once Amazon S3 has been configured as an event source, the console updates the bucket's notification configuration so that events are sent to AWS Lambda. The following permissions have to be added:
“Version”: “2016-05-08”,
“Statement”: [
“Sid”: “S3Permissions”,
“Effect”: “Allow”,
“Action”: [
“s3:GetBucketLocation”,
“s3:GetBucketNotification”,
“s3:PutBucketNotification”,
“s3:ListAllMyBuckets”
],
“Resource”: “arn:aws:s3:::*”
},
“Sid”: “PermissionToFunctionPolicy”,
“Effect”: “Allow”,
“Action”: [
“lambda:AddPermission”,
“lambda:RemovePermission”
],
“Resource”: “arn:aws:lambda:region:account-id:function:*”
}
Amazon SNS
The console is responsible for listing Amazon Simple Notification Service (SNS) topics in
the account. Once you have selected a topic, the AWS Lambda will subscribe the Lambda
function to the Amazon SNS topic. For this to work, the following permissions have to be
added:
“Version”: “2016-05-08”,
“Statement”: [
“Sid”: “SNSPermissions”,
“Effect”: “Allow”,
“Action”: [
“sns:ListSubscriptions”,
“sns:ListSubscriptionsByTopic”,
“sns:ListTopics”,
“sns:Subscribe”,
“sns:Unsubscribe”
],
“Resource”: “arn:aws:sns:region:account-id:*”
},
“Sid”: “PermissionToFunctionPolicy”,
“Effect”: “Allow”,
“Action”: [
“lambda:AddPermission”,
“lambda:RemovePermission”,
“lambda:GetPolicy”
],
“Resource”: “arn:aws:lambda:region:account-id:function:*”
},
“Sid”: “LambdaESMappingsPermissions”,
“Effect”: “Allow”,
“Action”: [
“lambda:ListEventSourceMappings”
],
“Resource”: “*”
}
]
}
AWS IoT
The console lists the available AWS IoT rules. Once you have selected a particular rule, the console populates the rest of the information associated with the rule in the user interface. If an existing rule is selected, the console updates it so that the events are sent to AWS Lambda. The following permissions have to be added:
“Version”: “2016-05-08”,
“Statement”: [
“Sid”: “IoTpermissions”,
“Effect”: “Allow”,
“Action”: [
“iot:GetTopicRule”,
“iot:CreateTopicRule”,
“iot:ReplaceTopicRule”
],
“Resource”: “arn:aws:iot:region:account-id:*”
},
{
“Sid”: “IoTTopicRulePermissions”,
“Effect”: “Allow”,
“Action”: [
“iot:ListTopicRules”
],
“Resource”: “*”
},
“Sid”: “LambdaPermissions”,
“Effect”: “Allow”,
“Action”: [
“lambda:AddPermission”,
“lambda:RemovePermission”,
“lambda:GetPolicy”
],
“Resource”: “arn:aws:lambda:region:account-id:function:*”
}
Chapter 6- Resource-Based Policies in AWS Lambda
A Lambda function is one of the resources in AWS Lambda, and permissions can be added to the policy associated with a particular Lambda function. Permission policies which have been attached to Lambda functions are usually known as "resource-based policies," or "Lambda function policies." Lambda function policies are used for managing permissions for Lambda function invocation.
The console does not directly support modifying permissions in a function policy; one must use either the AWS SDKs or the AWS CLI. In this chapter, we are going to use the AWS CLI.
Permitting Amazon S3 to Invoke a Lambda Function
To configure Amazon S3 to be in a position to invoke the Lambda function, the Amazon S3 service principal must be granted the necessary invoke permissions. If you need to ensure that the events are generated only from a specific bucket owned by a specific account, specify the bucket ARN as the value for "source-arn" and the account ID as the value for "source-account" so as to restrict the permission accordingly.
Consider the AWS CLI command given below, which adds a permission to the policy of the Lambda function named "hellothere," granting Amazon S3 permission to invoke the function:
aws lambda add-permission \
--region asia-west-2 \
--function-name hellothere \
--statement-id 1 \
--principal s3.amazonaws.com \
--action lambda:InvokeFunction \
--source-arn arn:aws:s3:::samplebucket \
--source-account 111111111111 \
--profile adminuser
In the above example, it has been assumed that the "adminuser" is the one adding the permission. This means that the "--profile" parameter has been used for specifying the profile of the adminuser.
In response to the above command, AWS Lambda will return some JSON. The value of "Statement" is just a JSON string version of the statement which has been added to our Lambda function policy:
“Statement”: “{\“Condition\”:{\“StringEquals\”:
{\“AWS:SourceAccount\”:\“12222287611\”},
\“ArnLike\”:{\“AWS:SourceArn\”:\“arn:aws:s3:::samplebucket\”}},
\“Action\”:[\“lambda:InvokeFunction\”],
\“Resource\”:\“arn:aws:lambda:asia-west-2: 12222287611:function:hellothere\”,
\“Effect\”:\“Allow\”,\“Principal\”:{\“Service\”:\“s3.amazonaws.com\”},
\“Sid\”:\“1\”}”
}
Permitting Amazon API Gateway to call a Lambda Function
For Amazon API Gateway to be allowed to invoke a Lambda function, we have to add a permission to the function policy, with the API Gateway service principal as the entity being granted the invoke permission.
The AWS CLI command given below adds a permission to the policy of the Lambda function named "hellothere," granting API Gateway permission to invoke the function:
aws lambda add-permission \
--region asia-west-2 \
--function-name hellothere \
--statement-id 5 \
--principal apigateway.amazonaws.com \
--action lambda:InvokeFunction \
--source-arn "arn:aws:execute-api:region:account-id:api-id/stage/method/resource-path" \
--profile adminuser
AWS Lambda will respond to the above command with JSON. The value of "Statement" is just a JSON string version of the statement which has been added to our Lambda function policy:
“Statement”: “{\“Condition\”:{\“ArnLike\”:
{\“AWS:SourceArn\”:\“arn:aws:apigateway:asia-east-1::my-api-
id:/test/walkthrough/dogs\”}},
\“Action\”:[\“lambda:InvokeFunction\”],
\“Resource\”:\“arn:aws:lambda:asia-west-2:account-id:function:helloworld\”,
\“Effect\”:\“Allow\”,
\“Principal\”:{\“Service\”:\“apigateway.amazonaws.com\”},
\“Sid\”:\“5\”}”
A user application created by another AWS account can also be allowed to invoke a Lambda function (a cross-account scenario). For this to be the case, the account ID of that AWS account should be specified as the "principal." The AWS CLI command given below best demonstrates this:
aws lambda add-permission \
--region asia-west-2 \
--function-name hellothere \
--statement-id 3 \
--principal 12222287611 \
--action lambda:InvokeFunction \
--profile adminuser
The AWS Lambda will respond by giving the following code. The value “statement” is the
JSON string version of statement which has been added to our Lambda function policy.
“Statement”: “{\“Action\”:[\“lambda:InvokeFunction\”],
\“Resource\”:\“arn:aws:lambda:asia-west-2:account-id:function:hellothere\”,
\“Effect\”:\“Allow\”,
\“Principal\”:{\“AWS\”:\“account-id\”},
\“Sid\”:\“3\”}”
}
Retrieving a Lambda Function Policy
To retrieve the Lambda function policy, you should use the "get-policy" command:
aws lambda get-policy \
--function-name sample \
--profile adminuser
It is also possible to remove a permission from a Lambda function policy. This is done using the "remove-permission" command. However, you have to specify the name of the function and the statement ID. This is best demonstrated in the example given below:
aws lambda remove-permission \
--function-name sample \
--statement-id 1 \
--profile adminuser
Chapter 7- Actions
Let us discuss some of the actions which are supported in AWS Lambda.
CreateAlias
This is used for creating an alias to the function which points to a specified version of a
Lambda function. Alias names are always unique to the given function. For this to work,
the permission for the action “lambda:CreateAlias” has to be added.
The request syntax is as follows:
POST /2015-03-31/functions/FunctionName/aliases HTTP/1.1
Content-type: application/json
{
"Description": "string",
"FunctionVersion": "string",
"Name": "string"
}
FunctionName- this represents the name of your Lambda function for which an alias is to
be created. It should have a minimum length of 1, and a maximum length of 140.
The request should have the following data specified in JSON format:
Description- this is a string describing the alias to be created.
FunctionVersion- this is the function version for which you are creating the alias.
Name- this is the name of the alias you are creating.
The response syntax should be as follows:
HTTP/1.1 201
Content-type: application/json
{
"AliasArn": "string",
"Description": "string",
"FunctionVersion": "string",
"Name": "string"
}
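As a sketch, an alias can also be created from the AWS CLI; the function name, alias name, version, and description below are placeholders:
aws lambda create-alias \
--function-name my-function \
--name PROD \
--function-version 1 \
--description "production alias"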
DeleteAlias
This is used for deleting the Lambda function alias that you specify. For this to happen, permission for the action "lambda:DeleteAlias" has to be granted. The request requires the following:
FunctionName- this is the name of the Lambda function whose alias is to be deleted.
Name- this is the name of the alias which you intend to delete.
You should then get the following as the response syntax:
HTTP/1.1 204
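As a sketch, the corresponding AWS CLI command for deleting an alias; the function and alias names are placeholders:
aws lambda delete-alias \
--function-name my-function \
--name PROD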
Conclusion
We have come to the conclusion of this guide. My hope is that you have learned most of
the properties associated with AWS Lambda. You should now be aware of how to create a
deployment package. We have discussed how to do this in Node, Java, and Python. In
Node, you can choose to do it by writing the code on your own or by using the console.
Note that the deployment package can be created in either .jar or .zip form, so you can use whichever suits your project.
In Java, one can choose to use Maven for the purpose of creating the deployment
package. Again, the package can either be in a .zip or .jar form. You can choose to use the
Maven package alone, in which case you will first begin by installing it. Also, you can
choose to combine the Maven with Eclipse IDE for Java, in which case you will have to
install the Maven plugin for the Eclipse. For you to be able to carry out certain tasks in
AWS Lambda, there are some permissions and actions which have to be permitted in a
particular account, and this has been discussed.
Limits on resource usage are in place, and some of them, such as the concurrent execution limit, can be raised on request; it is good for you to know how to do this so as to take care of concurrent executions.