AWS || Lambda Functions || CD Steps || CI with GitHub Actions


To set up continuous integration (CI) using GitHub Actions, follow these steps:

  • Create a GitHub Repository: Start by creating a repository on GitHub to host your source code. If you already have a repository, you can skip this step.
  • Define Workflow: Inside your GitHub repository, create a new directory named .github/workflows. In this directory, create a YAML file (e.g., ci.yml) to define your CI workflow.
  • Configure Workflow: Open the YAML file and define the workflow using the GitHub Actions syntax. Specify the trigger event, such as pushes to specific branches or pull requests. You can also configure other event types like schedule or repository dispatch.
  • Specify Jobs and Steps: Define one or more jobs within the workflow. Each job represents a set of steps that will be executed. For example, you can have a job for building your application, running tests, and generating code coverage reports.
  • Set up Environment: Specify the environment for your CI workflow. This includes the operating system, programming language, and any required dependencies. GitHub Actions provides a variety of pre-configured environments, or you can create a custom environment using Docker containers.
  • Define Steps: Within each job, define the steps that need to be executed. These can include cloning the repository, installing dependencies, running commands or scripts, executing tests, and generating artifacts.
  • Configure Caching: To optimize the CI workflow, consider caching dependencies or build artifacts between workflow runs. This can significantly speed up subsequent executions by reusing the cached content.
  • Add Optional Features: GitHub Actions provides additional features such as parallelism, matrix builds, and environment variables. Explore these features to enhance your CI workflow as per your requirements.
  • Commit and Push: Save the YAML file and commit it to your repository. Push the changes to trigger the CI workflow. GitHub Actions will automatically detect the workflow file and start executing the defined steps.
  • Monitor Workflow Execution: Monitor the workflow execution from the GitHub Actions tab in your repository. You can view logs, see the status of each step, and troubleshoot any issues that may arise during the CI process.
By following these steps, you can set up continuous integration using GitHub Actions. This will enable automatic testing and validation of your codebase whenever changes are pushed to the repository, helping to catch issues early in the development cycle.
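The steps above produce a workflow file such as the following minimal sketch of .github/workflows/ci.yml (this assumes a Python project whose tests run with pytest; the branch name, runtime version, and commands are illustrative and should be adapted to your stack):

```yaml
name: CI

on:                          # trigger events (step "Configure Workflow")
  push:
    branches: [main]
  pull_request:

jobs:
  build:                     # one job (step "Specify Jobs and Steps")
    runs-on: ubuntu-latest   # environment (step "Set up Environment")
    steps:
      - uses: actions/checkout@v3        # clone the repository
      - uses: actions/setup-python@v4
        with:
          python-version: "3.11"
      - name: Install dependencies
        run: pip install -r requirements.txt
      - name: Run tests
        run: pytest
```

Committing this file triggers the workflow on the next push, and its runs appear under the repository's Actions tab.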




To set up continuous delivery (CD) using AWS CodePipeline, you can follow these steps:

  1. Define your Application Architecture: Determine the architecture of your application, including the different components and deployment targets (e.g., Amazon EC2 instances, AWS Lambda functions, AWS Elastic Beanstalk, etc.).

  2. Create an IAM Role: Start by creating an IAM role that CodePipeline can use to access and manage AWS resources, such as your source code repository, build environment, and deployment targets. Ensure the role has the necessary permissions for these actions.

  3. Set up your Source Stage: Configure the source provider for your code repository. CodePipeline supports various source providers, including AWS CodeCommit, GitHub, and Bitbucket. Provide the necessary information to connect to your repository, such as the repository name, branch, and authentication details.

  4. Configure your Build Stage: Select the build provider you want to use, such as AWS CodeBuild. Configure the build settings, including the build environment, build specifications, and any additional build options. You can define custom build scripts or use predefined build configurations.

  5. Configure your Test Stage: Set up a stage for testing your application. You can use AWS CodeBuild, AWS CodeDeploy, or any other testing tool that integrates with CodePipeline. Define the necessary tests and configurations to ensure the quality of your application.

  6. Set up Deployment Stages: Create deployment stages for your application. This can include deploying to a staging environment for further testing or deploying to a production environment. Configure the deployment settings based on your chosen deployment targets, such as Amazon EC2, AWS Elastic Beanstalk, or AWS Lambda.

  7. Add Additional Stages: Depending on your CD requirements, you can add more stages to the pipeline. This might include manual approval stages, security and compliance checks, or any other necessary steps in your deployment process.

  8. Configure Notifications: Set up notifications to receive alerts and updates about pipeline execution. CodePipeline can send notifications to Amazon SNS, Amazon Simple Queue Service (SQS), or email, allowing you to stay informed about the status of your pipeline.

  9. Review and Create Pipeline: Review the pipeline configuration and ensure that all stages are correctly set up. Validate the settings, permissions, and integration with the selected services. Once you're satisfied, create the pipeline.

  10. Monitor and Iterate: Monitor the execution of your pipeline, review logs and error messages, and iteratively improve your CD process. Gather feedback, make adjustments, and optimize your pipeline for faster and more reliable deployments.

By following these steps, you can establish a continuous delivery workflow using AWS CodePipeline. This will enable you to automate the build, test, and deployment processes for your application, resulting in faster and more efficient software delivery.


An alternative, console-focused walkthrough of the same AWS CodePipeline setup:

  1. Create an IAM Role: Start by creating an IAM role that will be used by CodePipeline to access and manage AWS resources. Ensure the role has the necessary permissions to interact with your source code repository, build environment, and deployment targets.

  2. Create a CodePipeline: Go to the AWS Management Console and navigate to CodePipeline. Click on "Create pipeline" to begin the setup process.

  3. Configure Pipeline Settings: Provide a name for your pipeline and select the service role you created in Step 1. Choose whether you want to start with a new pipeline or use a pipeline template. Click on "Next" to proceed.

  4. Set up Source Stage: Select the source provider for your code repository, such as AWS CodeCommit, GitHub, or Bitbucket. Provide the necessary information to connect to your repository, including repository name, branch, and authentication details. CodePipeline will automatically detect changes in your repository and trigger the pipeline accordingly.

  5. Configure Build Stage: Choose the build provider you want to use for your project, such as AWS CodeBuild. Configure the build settings, including the build environment, build specification file location, and any additional build options.

  6. Set up Deployment Stage: Select the deployment provider based on your application's deployment target, such as AWS Elastic Beanstalk, Amazon ECS, or AWS Lambda. Configure the deployment settings, including the target environment, application name, and deployment options.

  7. Add Additional Stages: Depending on your CD requirements, you can add additional stages to the pipeline. These stages can include testing, approval, or any other necessary steps before deploying to production.

  8. Review and Create Pipeline: Review the pipeline configuration to ensure everything is set up correctly. Click on "Create pipeline" to create the pipeline in AWS CodePipeline.

  9. Monitor Pipeline Execution: Once the pipeline is created, CodePipeline will automatically start executing the stages based on the changes in your source repository. You can monitor the pipeline's progress, view logs, and troubleshoot any issues from the CodePipeline console.

  10. Iterate and Improve: CD is an iterative process. Continuously monitor and improve your pipeline by incorporating feedback, automating more processes, and enhancing your testing and deployment strategies.

By following these steps, you can set up a CD workflow using AWS CodePipeline, which will automate the build, test, and deployment processes for your application.
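Step 9's monitoring can also be done from code. A small boto3 sketch (the pipeline name is illustrative; the injectable `client` parameter is an addition here so the helper can be exercised without AWS credentials):

```python
def pipeline_stage_summary(pipeline_name, client=None):
    """Return {stage name: latest execution status} for a CodePipeline pipeline."""
    if client is None:
        import boto3  # AWS SDK for Python
        client = boto3.client("codepipeline")
    state = client.get_pipeline_state(name=pipeline_name)
    return {
        stage["stageName"]: stage.get("latestExecution", {}).get("status", "Unknown")
        for stage in state["stageStates"]
    }
```

Calling `pipeline_stage_summary("my-pipeline")` returns something like `{"Source": "Succeeded", "Build": "InProgress"}`, which is handy for dashboards or chat notifications alongside the built-in console view.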








  • Setting an S3 bucket trigger on Lambda (whenever a file is inserted into the S3 bucket, the event fires)
  • Reading file content from the S3 bucket on a Lambda trigger
  • Setting cron jobs with scheduled time intervals
  • AWS Lambda Layers: an archived (zip) file added to the bundle to provide libraries
  • Whitelisting IP addresses





Compute --> Lambda --> Function

1) Author from scratch

2) Use a blueprint

3) Browse serverless app repository.


1) Author from scratch

1.1) Function name

1.2) Runtime -> Java, Python, Ruby

1.3) Permissions (to print logs, the service needs the logging permission enabled; for bucket put & get access -> AWSLambdaExecute)

1.4) Edit code and Test
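Step 1.4's inline editor starts from a handler along these lines (a minimal sketch; the response shape shown is only illustrative):

```python
import json


def lambda_handler(event, context):
    # Minimal handler for the console's inline editor and "Test" button.
    # print() output lands in CloudWatch Logs, which is why the logging
    # permission in step 1.3 is needed.
    print("Received event:", json.dumps(event))
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "Hello from Lambda!"}),
    }
```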



Setting an S3 bucket trigger on Lambda (whenever a file is inserted into the S3 bucket, the event fires)

Steps:

Create an S3 bucket
Create an IAM role -> select the Lambda service -> attach the permission (AWSLambdaExecute)
Create the Lambda function -> using 1.1, 1.2, 1.3
Add trigger -> go to the Lambda function, +Add trigger -> "All object create events"
Test -> go to the Lambda function -> Monitoring -> View logs in CloudWatch
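A handler for this trigger can be sketched as follows; the event structure is the one S3 delivers for object-created events (note that object keys arrive URL-encoded, so keys with spaces or special characters may need decoding):

```python
def lambda_handler(event, context):
    # S3 "All object create events" deliver one or more Records per invocation
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]  # URL-encoded by S3
        print(f"New object: s3://{bucket}/{key}")  # visible in CloudWatch Logs
        processed.append((bucket, key))
    return {"processed": len(processed)}
```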


How to read file content from S3 on a Lambda trigger.
https://youtu.be/WBgedoH3Vn4 

1) Check permissions and add the permission (AWSLambdaExecute)

2) Modify the code of the lambda_trigger function (import boto3)
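Step 2's modified handler can be sketched like this with boto3 (the optional `s3` parameter is an addition here so the function can be exercised without AWS; in Lambda it is left as None and a real client is created):

```python
def lambda_handler(event, context, s3=None):
    if s3 is None:
        import boto3  # available by default in the Lambda Python runtime
        s3 = boto3.client("s3")
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]
    # Fetch the newly created object and read its content
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
    print(body)  # file content, visible in CloudWatch Logs
    return {"bucket": bucket, "key": key, "length": len(body)}
```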



Setting a cron job on a Lambda function (a job that runs at a scheduled time interval):

1) Create a Lambda function

2) On the Designer page, +Add trigger -> EventBridge (CloudWatch Events)

3) Add a rule -> select the EC2 machine

4) Schedule expression -> set a time interval, e.g. rate(5 minutes), or
    cron(0 17 ? * MON-FRI *) -> minute 0 of hour 17 (17:00 UTC), Monday to Friday, every week of the year
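The same rule can be created programmatically through the EventBridge API. A hedged sketch that only builds the `put_rule` arguments (the rule name is illustrative, and the actual boto3 call is left commented):

```python
def schedule_rule_kwargs(rule_name, schedule_expression):
    """Build the arguments for EventBridge put_rule. schedule_expression is
    e.g. 'rate(5 minutes)' or 'cron(0 17 ? * MON-FRI *)' (17:00 UTC,
    Monday to Friday)."""
    return {
        "Name": rule_name,
        "ScheduleExpression": schedule_expression,
        "State": "ENABLED",
    }

# events = boto3.client("events")  # EventBridge (CloudWatch Events)
# events.put_rule(**schedule_rule_kwargs("every-5-min", "rate(5 minutes)"))
```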



AWS Lambda Layers: an archived (zip) file added to the bundle to provide libraries.

1) Create a Lambda function; under additional resources --> Layers, +Add layer -> custom layer

2) The imported libraries are then accepted

3) Create the layer using a shell script

4) https://youtu.be/pj9svK2nfmk
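The layer archive can also be produced from Python instead of a shell script. A sketch, assuming the libraries are already installed into a local directory; for Python runtimes, Lambda expects the libraries under a top-level python/ folder inside the zip:

```python
import os
import zipfile


def build_layer_zip(site_packages_dir, zip_path):
    """Package an installed-libraries directory as a Lambda layer zip,
    placing every file under the 'python/' prefix Lambda expects."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(site_packages_dir):
            for name in files:
                full = os.path.join(root, name)
                rel = os.path.relpath(full, site_packages_dir).replace(os.sep, "/")
                zf.write(full, "python/" + rel)
    return zip_path
```

The resulting zip is what you upload on the +Add layer screen (or via `aws lambda publish-layer-version`).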


To whitelist IP addresses:

1) Create a Lambda function

2) Click on Configuration -> +Add function URL

3) Write the code to whitelist the APIs, then +Save and +Deploy

4) Lambda function -> ip-address-validates -> edit environment variables


"""
-*- coding: utf-8 -*-
========================
AWS Lambda
========================
Contributor: Chirag Rathod (Srce Cde)
========================
"""

import os
import ast
import json
from ipaddress import ip_network, ip_address


def check_ip(IP_ADDRESS, IP_RANGE):
VALID_IP = False
cidr_blocks = list(filter(lambda element: "/" in element, IP_RANGE))
if cidr_blocks:
for cidr in cidr_blocks:
net = ip_network(cidr)
VALID_IP = ip_address(IP_ADDRESS) in net
if VALID_IP:
break
if not VALID_IP and IP_ADDRESS in IP_RANGE:
VALID_IP = True

return VALID_IP


def return_func(
status_code=200,
message="Invocation successful!",
headers={"Content-Type": "application/json"},
isBase64Encoded=False,
):
return {
"statusCode": status_code,
"headers": headers,
"body": json.dumps({"message": message}),
"isBase64Encoded": isBase64Encoded,
}


def lambda_handler(event, context):
IP_ADDRESS = event["requestContext"]["http"]["sourceIp"]
IP_RANGE = ast.literal_eval(os.environ.get("IP_RANGE", "[]"))
METHOD = event["requestContext"]["http"]["method"]

if not IP_RANGE:
return return_func(status_code=500, message="Unauthorized")

VALID_IP = check_ip(IP_ADDRESS, IP_RANGE)

if not VALID_IP:
return return_func(status_code=500, message="Unauthorized")

if METHOD == "GET":
return return_func(status_code=200, message="GET method invoked!")

if METHOD == "POST":
return return_func(status_code=200, message="POST method invoked!")

return return_func()
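One non-obvious detail of the code above: IP_RANGE is parsed with ast.literal_eval, so the environment variable must contain a Python-style list literal, mixing exact IPs and CIDR blocks (the addresses below are documentation examples, not values from the original post):

```python
import ast

# The IP_RANGE environment variable must be a Python list literal;
# entries may be exact addresses or CIDR blocks.
raw = '["203.0.113.10", "198.51.100.0/24"]'
ip_range = ast.literal_eval(raw)
print(ip_range)
```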




AWS Lambda Cheatsheet

The examples in this cheatsheet are based on Python.


Runtime Versions

Type         Version        AWS SDK                                    Operating System
Node.js      nodejs10.x     (JavaScript) 2.712.0                       Amazon Linux 2
Node.js      nodejs12.x     (JavaScript) 2.712.0                       Amazon Linux 2
Java         java11         (JDK) amazon-corretto-11                   Amazon Linux 2
Java         java8.al2      (JDK) amazon-corretto-8                    Amazon Linux 2
Java         java8          (JDK) java-1.8.0-openjdk                   Amazon Linux
Python       python3.8      (Python) boto3-1.14.40, botocore-1.17.40   Amazon Linux 2
Python       python3.7      (Python) boto3-1.14.40, botocore-1.17.40   Amazon Linux
Python       python3.6      (Python) boto3-1.14.40, botocore-1.17.40   Amazon Linux
Python       python2.7      (Python) boto3-1.14.40, botocore-1.17.40   Amazon Linux
Ruby         ruby2.7        (Ruby) 3.0.3                               Amazon Linux 2
Ruby         ruby2.5        (Ruby) 3.0.3                               Amazon Linux
.NET Core    dotnetcore3.1  --                                         Amazon Linux 2
.NET Core    dotnetcore2.1  --                                         Amazon Linux
Go           go1.x          --                                         Amazon Linux
Custom       provided.al2   --                                         Amazon Linux 2
Custom       provided       --                                         Amazon Linux

Available Operating Systems

Type            Image                                        Kernel
Amazon Linux    amzn-ami-hvm-2018.03.0.20181129-x86_64-gp2   4.14.171-105.231.amzn1.x86_64
Amazon Linux 2  Custom                                       4.14.165-102.205.amzn2.x86_64

Settings | Limits

Writable path & space: /tmp/, 512 MB
Default memory & execution time: 128 MB memory, 3-second timeout
Maximum memory & execution time: 10,240 MB memory (in 1 MB increments), 900-second (15-minute) timeout
Number of processes and threads (total): 1,024
Number of file descriptors (total): 1,024
Maximum deployment package size: 50 MB (zipped, direct upload), 250 MB (unzipped, including layers)
Container image code package size: 10 GB
Maximum deployment package size for the console editor: 3 MB
Total size of deployment packages per region: 75 GB (can be increased up to terabytes)
Maximum size of environment variables set: 4 KB
Maximum function layers: 5
Maximum test events (console editor): 10
Invocation payload limit (request and response): 6 MB (synchronous), 256 KB (asynchronous)
Elastic network interfaces per VPC: 250 (can be increased up to hundreds)

Lambda Destinations
  • Sends invocation records to a destination (SQS queue, SNS topic, Lambda function, or EventBridge event bus) when the Lambda function is invoked asynchronously
  • Also supports stream invocations

Monitoring tools
  • (Default) CloudWatch Logs stream
  • AWS X-Ray
  • CloudWatch Lambda Insights (preview)

VPC
  • When you enable VPC access, your Lambda function loses default internet access
  • If the function requires external internet access, ensure that its security group allows outbound connections and that the VPC has a NAT gateway

Concurrency
  • Concurrent execution refers to the number of function instances executing at a given time; by default the limit is 1,000 across all functions within a given region
  • AWS Lambda keeps 100 for unreserved functions
  • So, with a limit of 1,000, you can reserve concurrency for selected functions from the remaining 900, while the last 100 are used for unreserved functions
  • Can be increased up to hundreds of thousands

DLQ (Dead Letter Queue)
  • A failed asynchronous invocation is retried twice by default, after which the event is discarded
  • A DLQ instructs Lambda to send unprocessed events to AWS SQS or AWS SNS
  • A DLQ helps you troubleshoot and examine unprocessed requests

Throttle
  • Throttling sets the reserved concurrency of the function to zero and throttles all future invocations
  • If the function is throttled, it fails to run
  • If a throttled function is run from the Lambda console, it throws "Calling the Invoke API failed with message: Rate Exceeded."

File system
  • The file system option lets you attach an Amazon EFS file system, which provides distributed network storage for the instances of the function
  • To connect to the file system, the Lambda function must be connected to a VPC

State machines
  • Step Functions state machines can orchestrate this function
  • The Step Functions state machines page lists all state machines in the current AWS region with at least one workflow step that invokes a Lambda function

Database proxies
  • A database proxy manages a pool of database connections and relays queries from a function
  • It uses a Secrets Manager secret to access credentials for the database
  • To use a database proxy, the Lambda function must be connected to a VPC

Execution Role (common execution roles available)

AWSLambdaBasicExecutionRole: grants permissions only for the Amazon CloudWatch Logs actions needed to write logs.
AWSLambdaKinesisExecutionRole: grants permissions for Amazon Kinesis Streams actions and CloudWatch Logs actions.
AWSLambdaDynamoDBExecutionRole: grants permissions for DynamoDB Streams actions and CloudWatch Logs actions.
AWSLambdaVPCAccessExecutionRole: grants permissions for Amazon Elastic Compute Cloud (Amazon EC2) actions to manage elastic network interfaces (ENIs).
AWSXrayWriteOnlyAccess: grants permission for X-Ray to upload trace data for debugging and analysis.
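Attaching one of these managed policies to a role can be scripted with boto3. A sketch (the role name is an assumption; note that the AWSLambda*ExecutionRole policies live under the service-role/ path, while AWSXrayWriteOnlyAccess sits directly under policy/, so adjust the ARN prefix for that one):

```python
def attach_execution_policy(role_name, policy="AWSLambdaBasicExecutionRole", iam=None):
    """Attach one of the managed execution policies above to a Lambda role.
    The iam parameter is injectable so the helper can be tested offline."""
    if iam is None:
        import boto3
        iam = iam or boto3.client("iam")
    policy_arn = "arn:aws:iam::aws:policy/service-role/" + policy
    iam.attach_role_policy(RoleName=role_name, PolicyArn=policy_arn)
    return policy_arn
```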

Add new permission
import boto3
client = boto3.client('lambda')

# Role ARN can be found on the top right corner of the Lambda function
response = client.add_permission(
    FunctionName='string',
    StatementId='string',
    Action='string',
    Principal='string',
    SourceArn='string',
    SourceAccount='string',
    EventSourceToken='string',
    Qualifier='string'
)

Execution | Invoke | Tweaks

A Lambda can invoke another Lambda: Yes
A Lambda in one region can invoke a Lambda in another region: Yes
A Lambda can invoke itself: Yes
Exceed the 15-minute execution time: Yes (with workarounds)
How to exceed the 15-minute execution time: self-invoke, SNS, SQS
Asynchronous execution: Yes (async exec)
Invoke the same Lambda with a different version: Yes
Set the Lambda invoke max retry attempts to 0: Yes
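The self-invoke workaround above relies on asynchronous invocation. A hedged sketch of invoking another Lambda asynchronously (the function name is illustrative; the injectable `client` is an addition for offline testing):

```python
import json


def invoke_async(function_name, payload, client=None):
    """Fire-and-forget invocation: InvocationType='Event' queues the event
    and returns (HTTP 202) without waiting for the function to finish."""
    if client is None:
        import boto3
        client = boto3.client("lambda")
    return client.invoke(
        FunctionName=function_name,
        InvocationType="Event",  # "RequestResponse" would wait for the result
        Payload=json.dumps(payload),
    )
```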

Triggers

API Gateway
  Description: Trigger an AWS Lambda function over HTTPS.
  Requirements: API endpoint name, API endpoint deployment stage, security role.

AWS IoT
  Description: Trigger AWS Lambda to perform a specific action by mapping your AWS IoT Dash Button (cloud-programmable dash button).
  Requirement: DSN (Device Serial Number).

Alexa Skills Kit
  Description: Trigger AWS Lambda to build services that give new skills to Alexa.
  Requirement: --

Alexa Smart Home
  Description: Trigger AWS Lambda with the desired skill.
  Requirement: Application ID (skill).

Application Load Balancer
  Description: Trigger AWS Lambda from an ALB.
  Requirements: Application Load Balancer, listener (the port on which the ALB receives traffic), host, path.

CloudFront
  Description: Trigger AWS Lambda based on different CloudFront events.
  Requirements: CloudFront distribution, cache behaviour, CloudFront event (origin request/response, viewer request/response). To set a CloudFront trigger, you need to publish a version of the Lambda.
  Limitations: runtime is limited to Node.js 6.10; /tmp/ space is not available; environment variables, DLQs, and Amazon VPCs cannot be used.

CloudWatch Events
  Description: Trigger AWS Lambda on a desired time interval (e.g. rate(1 day)) or on state changes of EC2, RDS, S3, or Health.
  Requirement: a rule based on either a schedule expression (time interval) or an event pattern (events such as Auto Scaling instance launch and terminate, or an AWS API call via CloudTrail).

CloudWatch Logs
  Description: Trigger AWS Lambda based on CloudWatch Logs.
  Requirement: log group name.

CodeCommit
  Description: Trigger AWS Lambda from the AWS CodeCommit version control system.
  Requirements: repository name, event type.

Cognito Sync Trigger
  Description: Trigger AWS Lambda in response to an event each time a dataset is synchronized.
  Requirement: Cognito identity pool dataset.

DynamoDB
  Description: Trigger AWS Lambda whenever the DynamoDB table is updated.
  Requirements: DynamoDB table name, batch size (the largest number of records that AWS Lambda retrieves from your table when invoking your function; your function receives an event with all the retrieved records).

Kinesis
  Description: Trigger AWS Lambda whenever the Kinesis stream is updated.
  Requirements: Kinesis stream, batch size.

S3
  Description: Trigger AWS Lambda in response to a file dropped into an S3 bucket.
  Requirements: bucket name, event type (object removed, object created).

SNS
  Description: Trigger AWS Lambda whenever a message is published to an Amazon SNS topic.
  Requirement: SNS topic.

SQS
  Description: Trigger AWS Lambda on message arrival in SQS.
  Requirements: SQS queue, batch size.
  Limitation: it only works with standard queues, not FIFO queues.

Troubleshooting

Error:
  File "/var/task/lambda_function.py", line 2, in lambda_handler
    return event['demoevent']
  KeyError: 'demoevent'
Possible reason: the event does not have the key 'demoevent', or the key is misspelled.
Solution: if the event comes from a trigger, make sure it contains the expected key; if the event is passed in manually, check for misspellings; or print the event to inspect its keys.

Error: botocore.exceptions.ClientError: An error occurred (AccessDeniedException) when calling the GetParameters operation: User: arn:aws:dummy:1234assumed-role/role/ is not authorized to perform: ssm:GetParameters on resource: arn:aws:ssm:dummy
Possible reason: the function's role lacks permission to access the resource.
Solution: assign the appropriate permission.

Error: ImportError: Missing required dependencies ['module']
Possible reason: a dependent module is missing.
Solution: install/upload the required module.

Error: sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not translate host name "host.dummy.region.rds.amazonaws.com" to address: Name or service not known
Possible reason: the RDS host is unavailable.
Solution: make sure the RDS instance is up and running, and double-check the RDS hostname.

Error: [Errno 32] Broken pipe
Possible reason: the connection was lost (either on your side or possibly a problem at AWS), or, while invoking another Lambda, the payload size exceeded the limit.
Solution: make sure you are passing a payload of the right size, and check the connection.

Error: Unable to import module 'lambda_function/index': No module named 'lambda_function'
Possible reason: the handler configuration does not match the main file name.
Solution: update the handler configuration to filename.function_name.

Error: OperationalError: (psycopg2.OperationalError) terminating connection due to administrator command SSL connection has been closed unexpectedly
Possible reason: the RDS/database system has been rebooted. In a typical web application using an ORM (SQLAlchemy) session, this condition corresponds to a single request failing with a 500 error while the application continues normally afterwards; the approach is "optimistic" in that frequent database restarts are not anticipated.
Solution: retry the operation.

Error: Error code 429
Possible reason: the function is throttled; either its reserved concurrency is set to zero or the account-level throttle has been reached. (A synchronously invoked function returns a 429 error when throttled; an asynchronously invoked function retries the throttled event for up to 6 hours.)
Solution: check the reserved concurrency limit or throttle status for the individual function, or check the account-level concurrent execution limit.
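For the KeyError in the first row, a defensive handler sketch using dict.get (the key and default value are illustrative):

```python
def lambda_handler(event, context):
    # dict.get avoids the KeyError shown above when a key may be
    # missing from the incoming event
    print(event)  # while debugging a trigger, log the whole event payload
    return event.get("demoevent", "key-not-present")
```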

AWS Lambda CLI commands


Add Permission

It adds the specified permission to the Lambda function

Syntax

  add-permission
--function-name <value>
--statement-id <value>
--action <value>
--principal <value>
[--source-arn <value>]
[--source-account <value>]
[--event-source-token <value>]
[--qualifier <value>]
[--cli-input-json <value>]
[--generate-cli-skeleton <value>]

Example

add-permission --function-name functionName --statement-id role-statement-id --action lambda:InvokeFunction --principal s3.amazonaws.com

Create Alias

It creates an alias for the given Lambda function

Syntax

  create-alias
--function-name <value>
--name <value>
--function-version <value>
[--description <value>]
[--cli-input-json <value>]
[--generate-cli-skeleton <value>]

Example

create-alias --function-name functionName --name aliasName --function-version version

Create Event Source Mapping

It maps an event source from an Amazon Kinesis stream or an Amazon DynamoDB stream to the Lambda function

Syntax

  create-event-source-mapping
--event-source-arn <value>
--function-name <value>
[--enabled | --no-enabled]
[--batch-size <value>]
--starting-position <value>
[--starting-position-timestamp <value>]
[--cli-input-json <value>]
[--generate-cli-skeleton <value>]

Example

create-event-source-mapping --event-source-arn arn:aws:kinesis:us-west-1:1111 --function-name functionName --starting-position LATEST

Create Function

It creates a new function

Syntax

  create-function
--function-name <value>
--runtime <value>
--role <value>
--handler <value>
[--code <value>]
[--description <value>]
[--timeout <value>]
[--memory-size <value>]
[--publish | --no-publish]
[--vpc-config <value>]
[--dead-letter-config <value>]
[--environment <value>]
[--kms-key-arn <value>]
[--tracing-config <value>]
[--tags <value>]
[--zip-file <value>]
[--cli-input-json <value>]
[--generate-cli-skeleton <value>]

Example

create-function --function-name functionName --runtime python3.6 --role arn:aws:iam::account-id:role/lambda_basic_execution
 --handler main.handler

Delete Alias

It deletes the alias

Syntax

  delete-alias
--function-name <value>
--name <value>
[--cli-input-json <value>]
[--generate-cli-skeleton <value>]

Example

delete-alias --function-name functionName --name aliasName

Delete Event Source Mapping

It deletes the event source mapping

Syntax

  delete-event-source-mapping
--uuid <value>
[--cli-input-json <value>]
[--generate-cli-skeleton <value>]

Example

delete-event-source-mapping --uuid 12345kxodurf3443

Delete Function

It deletes the function and all the associated settings

Syntax

  delete-function
--function-name <value>
[--qualifier <value>]
[--cli-input-json <value>]
[--generate-cli-skeleton <value>]

Example

delete-function --function-name FunctionName

Get Account Settings

It fetches the user's account settings

Syntax

  get-account-settings
[--cli-input-json <value>]
[--generate-cli-skeleton <value>]

Get Alias

It returns information about the desired alias, such as its description and ARN

Syntax

  get-alias
--function-name <value>
--name <value>
[--cli-input-json <value>]
[--generate-cli-skeleton <value>]

Example

get-alias --function-name functionName --name aliasName

Get Event Source Mapping

It returns the config information for the desired event source mapping

Syntax

  get-event-source-mapping
--uuid <value>
[--cli-input-json <value>]
[--generate-cli-skeleton <value>]

Example

get-event-source-mapping --uuid 12345kxodurf3443

Get Function

It returns the Lambda Function information

Syntax

  get-function
--function-name <value>
[--qualifier <value>]
[--cli-input-json <value>]
[--generate-cli-skeleton <value>]

Example

get-function --function-name functionName

Get Function Configuration

It returns the Lambda function configuration

Syntax

  get-function-configuration
--function-name <value>
[--qualifier <value>]
[--cli-input-json <value>]
[--generate-cli-skeleton <value>]

Example

  get-function-configuration --function-name functionName

Get Policy

It returns the policy linked with the Lambda function

Syntax

  get-policy
--function-name <value>
[--qualifier <value>]
[--cli-input-json <value>]
[--generate-cli-skeleton <value>]

Example

get-policy --function-name functionName

Invoke

It invokes the specified Lambda function

Syntax

  invoke
--function-name <value>
[--invocation-type <value>]
[--log-type <value>]
[--client-context <value>]
[--payload <value>]
[--qualifier <value>]

Example

invoke --function-name functionName

List Aliases

It returns all the aliases created for the Lambda function

Syntax

  list-aliases
--function-name <value>
[--function-version <value>]
[--marker <value>]
[--max-items <value>]
[--cli-input-json <value>]
[--generate-cli-skeleton <value>]

Example

  list-aliases --function-name functionName

List Event Source Mappings

It returns all the event source mappings created with create-event-source-mapping

Syntax

  list-event-source-mappings
[--event-source-arn <value>]
[--function-name <value>]
[--max-items <value>]
[--cli-input-json <value>]
[--starting-token <value>]
[--page-size <value>]
[--generate-cli-skeleton <value>]

Example

  list-event-source-mappings --event-source-arn arn:aws:arn --function-name functionName

List Functions

It returns all the Lambda functions

Syntax

  list-functions
[--master-region <value>]
[--function-version <value>]
[--max-items <value>]
[--cli-input-json <value>]
[--starting-token <value>]
[--page-size <value>]
[--generate-cli-skeleton <value>]

Example

  list-functions --master-region us-west-1 --function-version ALL

List Tags

It returns the list of tags assigned to the Lambda function

Syntax

  list-tags
--resource <value>
[--cli-input-json <value>]
[--generate-cli-skeleton <value>]

Example

  list-tags --resource arn:aws:function

List Versions by functions

It returns all the versions of the desired Lambda function

Syntax

  list-versions-by-function
--function-name <value>
[--marker <value>]
[--max-items <value>]
[--cli-input-json <value>]
[--generate-cli-skeleton <value>]

Example

list-versions-by-function --function-name functionName

Publish Version

It publishes a version of the Lambda function from the $LATEST snapshot

Syntax

  publish-version
--function-name <value>
[--code-sha-256 <value>]
[--description <value>]
[--cli-input-json <value>]
[--generate-cli-skeleton <value>]

Example

  publish-version --function-name functionName

Remove Permission

It removes a single permission from the policy linked with the Lambda function

Syntax

 remove-permission
--function-name <value>
--statement-id <value>
[--qualifier <value>]
[--cli-input-json <value>]
[--generate-cli-skeleton <value>]

Example

 remove-permission --function-name functionName --statement-id role-statement-id

Tag Resource

It creates tags for the Lambda function in the form of key-value pairs

Syntax

  tag-resource
--resource <value>
--tags <value>
[--cli-input-json <value>]
[--generate-cli-skeleton <value>]

Example

tag-resource --resource arn:aws:arn --tags key=pair

Untag Resource

It removes tags from the Lambda function

Syntax

 untag-resource
--resource <value>
--tag-keys <value>
[--cli-input-json <value>]
[--generate-cli-skeleton <value>]

Example

untag-resource --resource arn:aws:complete --tag-keys key1 key2

Update Alias

It updates the alias of the desired Lambda function

Syntax

  update-alias
--function-name <value>
--name <value>
[--function-version <value>]
[--description <value>]
[--cli-input-json <value>]
[--generate-cli-skeleton <value>]

Example

update-alias --function-name functionName --name aliasName

Update Event Source Mapping

It updates the event source mapping, in case you want to change the existing parameters

Syntax

  update-event-source-mapping
--uuid <value>
[--function-name <value>]
[--enabled | --no-enabled]
[--batch-size <value>]
[--cli-input-json <value>]
[--generate-cli-skeleton <value>]

Example

update-event-source-mapping --uuid 12345kxodurf3443

Update Function Code

It updates the code of the desired Lambda function

Syntax

  update-function-code
--function-name <value>
[--zip-file <value>]
[--s3-bucket <value>]
[--s3-key <value>]
[--s3-object-version <value>]
[--publish | --no-publish]
[--dry-run | --no-dry-run]
[--cli-input-json <value>]
[--generate-cli-skeleton <value>]

Example

update-function-code --function-name functionName

Update Function Configuration

It updates the configuration of the desired Lambda function

Syntax

  update-function-configuration
--function-name <value>
[--role <value>]
[--handler <value>]
[--description <value>]
[--timeout <value>]
[--memory-size <value>]
[--vpc-config <value>]
[--environment <value>]
[--runtime <value>]
[--dead-letter-config <value>]
[--kms-key-arn <value>]
[--tracing-config <value>]
[--cli-input-json <value>]
[--generate-cli-skeleton <value>]

Example

update-function-configuration --function-name functionName

