Watering plants using an AWS IoT Button

In this post, I explain how you can use an AWS IoT Enterprise Button to turn a TP-Link HS100 Smart Plug on and off using AWS Lambda functions. The Smart Plug in turn powers a pond pump that pumps water to the plants in my (wife’s) balcony garden.

The AWS IoT Button is a programmable button based on the Amazon Dash Button hardware. This simple Wi-Fi device is easy to configure and designed for developers to get started with AWS IoT Core, AWS Lambda, Amazon DynamoDB, Amazon SNS, and many other Amazon Web Services without writing device-specific code.

You can code the button’s logic in the cloud to configure button clicks to count or track items, call or alert someone, start or stop something, order services, or even provide feedback. For example, you can click the button to unlock or start a car, open your garage door, call a cab, call your spouse or a customer service representative, track the use of common household chores, medications or products, or remotely control your home appliances.

The AWS IoT Enterprise Button (next to a 100 fils coin from Bahrain)

The high level steps involved in building this solution are as follows:

  1. Figure out how the Smart Plug REST APIs work
  2. Write the Lambda functions to turn on and off the plug
  3. Configure your AWS IoT Button
  4. Set up the garden pump

Figure out how the Smart Plug REST APIs work

The APIs for the TP-Link smart plug aren’t officially documented. With some help from the internet, I figured out how my plug responds to API calls, using an app called Insomnia. (You could also use Postman.)

  1. First, send a POST to https://wap.tplinkcloud.com in JSON format to authenticate and get a token. The UUID key can contain any arbitrary value. You can generate a UUIDv4 key here. Here is an example:
    [cc lang="json"]
    {
      "method": "login",
      "params": {
        "appType": "Kasa_Android",
        "cloudUserName": "yourkasausername@email.com",
        "cloudPassword": "your_password_here",
        "terminalUUID": "de51d001-8286-4f2d-895d-e3d777655882"
      }
    }
    [/cc]
  2. You should receive a response that contains a token. Copy this value to your clipboard.
  3. Send another POST request calling the getDeviceList method. This time, append the token parameter to the URL as follows: https://wap.tplinkcloud.com?token=ca0f00de-AbCdEfOASrmMW7Xqracf69c
    In the body of the request, enter the following JSON:
    [cc lang="json"]
    {
      "method": "getDeviceList"
    }
    [/cc]
  4. There are two important bits of information you’ll need from the response to getDeviceList, namely deviceId and appServerUrl.
  5. Form another POST request. This time, send it to the appServerUrl returned in the previous step, and append the token you received in step 2. For example: https://eu-wap.tplinkcloud.com?token=ca0f00de-AbCdEfOASrmMW7Xqracf69c
    In the body of the request, enter the following JSON (note that requestData is a JSON string with escaped quotes):
    [cc lang="json"]
    {
      "method": "passthrough",
      "params": {
        "deviceId": "your_device_id_here",
        "requestData": "{\"system\":{\"set_relay_state\":{\"state\":1}}}"
      }
    }
    [/cc]
  6. You will notice that setting the state key to 1 turns on the smart plug; sending the same request with state set to 0 turns it off. You now know everything you need to start writing AWS Lambda functions to control your smart plug.

Write the AWS Lambda function that toggles your plug status when invoked

To simplify things, I’ll create just one Lambda function that toggles the relay_state of the smart plug: it will turn the plug on if it’s off, and off if it’s on.

  1. Set the environment variables. In the code sample I’ve provided below, the Kasa app password is read from an OS environment variable, which you’ll need to define in the AWS Lambda console. Storing the password this way isn’t ideal, but there aren’t many simpler ways to make it work. You could also store the deviceId as an environment variable instead of determining it in code, since it remains constant; however, to keep the code reusable, I determine it on each execution.
  2. Write the lambda_function.py code; a sketch of the approach follows this list.
  3. Test your Lambda function, using the Test feature on the top of the AWS Lambda console. Your plug should toggle on and off with each execution.
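
To give you an idea of the approach, here is a minimal, self-contained sketch of a toggling lambda_function.py. It is not the exact code from my gist: the KASA_USERNAME and KASA_PASSWORD environment variable names are placeholders, the requests library needs to be bundled with your deployment package, and the get_sysinfo method used to read the current relay state is the community-documented one.

[cc lang="python"]
# Sketch only: toggle a TP-Link HS100 via the Kasa cloud API.
# Assumes KASA_USERNAME and KASA_PASSWORD are set as Lambda environment variables.
import json
import os
import uuid
import requests

TPLINK_CLOUD_URL = "https://wap.tplinkcloud.com"


def cloud_call(url, payload):
    """POST a JSON payload to the TP-Link cloud and return the 'result' object."""
    data = requests.post(url, json=payload).json()
    if data.get("error_code", 0) != 0:
        raise RuntimeError("TP-Link cloud error: {}".format(data))
    return data["result"]


def lambda_handler(event, context):
    # 1. Authenticate and get a token.
    token = cloud_call(TPLINK_CLOUD_URL, {
        "method": "login",
        "params": {
            "appType": "Kasa_Android",
            "cloudUserName": os.environ["KASA_USERNAME"],
            "cloudPassword": os.environ["KASA_PASSWORD"],
            "terminalUUID": str(uuid.uuid4()),
        },
    })["token"]

    # 2. Find the plug's deviceId and regional endpoint (appServerUrl).
    device = cloud_call(TPLINK_CLOUD_URL + "?token=" + token,
                        {"method": "getDeviceList"})["deviceList"][0]
    device_url = device["appServerUrl"] + "?token=" + token

    # 3. Read the current relay state with get_sysinfo.
    sysinfo = cloud_call(device_url, {
        "method": "passthrough",
        "params": {
            "deviceId": device["deviceId"],
            "requestData": json.dumps({"system": {"get_sysinfo": {}}}),
        },
    })
    current = json.loads(sysinfo["responseData"])["system"]["get_sysinfo"]["relay_state"]

    # 4. Toggle the relay.
    new_state = 0 if current == 1 else 1
    cloud_call(device_url, {
        "method": "passthrough",
        "params": {
            "deviceId": device["deviceId"],
            "requestData": json.dumps({"system": {"set_relay_state": {"state": new_state}}}),
        },
    })
    return {"previous_state": current, "new_state": new_state}
[/cc]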

Configure your AWS IoT button

It’s time to now configure your AWS IoT button to trigger the Lambda function you just created.

  1. If you are using an AWS IoT Enterprise Button, use the AWS IoT 1-Click app on your mobile device. The app is available for iOS and Android. For the original AWS IoT Button, you can use other methods as described in the AWS documentation.
  2. Connect the button to your Wi-Fi network and configure it to trigger the Lambda function you created in the previous section.
  3. Much of the configuration is automated: the certificates and the required IoT rules are set up for you.
  4. Verify that the AWS Lambda console shows the AWS IoT trigger. You should also see this in the IoT 1-Click mobile app.
  5. Test your AWS IoT Button. The smart plug should toggle on or off with each button press. The LED on the IoT Button blinks white while the command is in progress and turns solid green when it executes successfully.

Setting up the Garden Pump

Now for the fun part. You’ll need:

  1. A plastic reservoir with a lid. You can get a large plastic container from IKEA or Home Box. Make sure it doesn’t have holes (for wheels, etc.), as you’ll be filling it with water. A lid is desirable to prevent water loss through evaporation.
  2. A submersible pond/aquarium pump. Pro tip: get one with a higher flow rating if you’ll run a longer length of hose.
  3. Length of hose and fittings/accessories. I bought this from Amazon and it had all that I needed.
  4. A drill or other tool that lets you make holes in the lid of the reservoir.
  5. An electrical outlet close to the reservoir (or a suitable extension cord).
  6. And in case you forgot – you’ll need a TP-Link HS100 Smart Plug and an AWS IoT button.
  7. Wi-Fi: The smart plug needs to be within range of your home Wi-Fi. Note: your AWS IoT Button does NOT have to be on the same Wi-Fi network; it can even be in another country, so you could water your plants remotely over the internet!

Here’s a video detailing the setup. Have fun!

Important note/disclaimer: Water is a good conductor of electricity and there is risk of serious injury/death by electrocution if you do not follow common precautions. Do not put your hand into the reservoir with the power turned on. Always unplug the pump from the electric plug before putting any part of your body inside the reservoir. Use good quality pumps and wiring. Do not immerse any electrical component inside the reservoir except a submersible pump. Always follow manufacturer’s instructions. I cannot be held responsible or liable for any loss, injury or death that occurs by following these instructions.

Remotely control the robot using a chatbot, serverless compute and IoT – Part 2

Update: RekogRobot now has a 3D printed chassis! I use a grasping-hand mount for the camera module.

In part 1 of this two-post series, I showed you how I built a robot powered by a Raspberry Pi that moves about on its wheels, looks at objects and speaks what it sees, using the Amazon Rekognition, Amazon S3 and Amazon Polly services from Amazon Web Services (AWS). If you haven’t already read that post, I encourage you to go back and have a read. This post adds capabilities to that robot, letting you control the robot’s movement and functions with your own voice or text through an Amazon Lex chatbot. Lex is the same technology behind Amazon Alexa that powers your Amazon Echo devices.

Let’s take a look at the high-level design (and recap some basics):

Robot architecture diagram

An introduction to the main components

Conversational Bot

The user interacts with a conversational bot powered by Amazon Lex. Amazon Lex provides advanced deep learning functionalities of automatic speech recognition (ASR) for converting speech to text, and natural language understanding (NLU) to recognize the intent of the text, to enable you to build applications with highly engaging user experiences and lifelike conversational interactions. Amazon Lex brings the same deep learning technologies that power Amazon Alexa to any developer, enabling you to quickly and easily build sophisticated, natural language, conversational bots (“chatbots”). Our Lex bot ‘RekogRobot‘ is configured to understand two intents – Move and SeeObject.

Internet of Things

In addition, the robot is configured as an IoT (Internet of Things) device on AWS IoT. This allows the user to securely and reliably issue commands to remotely control the robot.

If the user intends the robot to Move, the Lex bot calls an AWS Lambda function named MoveRobot. Based on the direction of travel the user commanded, the MoveRobot function updates the device’s IoT shadow in AWS IoT with the command to move, along with the direction of travel. Similarly, if the user intends the robot to see objects, the Lex bot calls another Lambda function named SeeRobot, which updates the device’s IoT shadow with the command to SEE. I’ve also written a Python 3 script that runs on the robot; it uses MQTT to connect to the device’s IoT shadow on AWS and looks for commands (such as ‘see’ or ‘move’). If a command is found, it is executed immediately and the status is reported back to the shadow.
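
As a rough sketch of what that shadow update looks like from the Lambda side (the real functions are in the package described below), a desired state can be published with boto3; the command and direction keys here are illustrative:

[cc lang="python"]
# Illustrative only: publish a desired state to the robot's shadow from a Lambda function.
import json
import boto3

iot_data = boto3.client("iot-data")


def send_command(command, direction=None):
    desired = {"command": command}
    if direction:
        desired["direction"] = direction
    iot_data.update_thing_shadow(
        thingName="Rekogrobot",
        payload=json.dumps({"state": {"desired": desired}}),
    )

# MoveRobot would call send_command("MOVE", "Left"); SeeRobot would call send_command("SEE").
[/cc]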

Serverless functions

AWS Lambda lets you run code without provisioning or managing servers (serverless computing). You pay only for the compute time you consume – there is no charge when your code is not running. In this project, we just upload our python code and Lambda takes care of everything required to run and scale our code with high availability. Lambda supports multiple other languages and you can include libraries as well. In our project, MoveRobot and SeeRobot are both Lambda functions written in Python 3.6.

I have made all the code available as a package for download so you can start building right away.

When you extract the package you will find three folders:

  • lambda_functions: This contains the code that will go into your Lambda functions. You will find two subdirectories – MoveRobot and SeeRobot; each of these is a separate Lambda function. You can put the files for each function into separate zip archives and import the code in the AWS Lambda console.
  • raspberry_pi: This folder contains the Python 3 code that needs to run on your Raspberry Pi, as well as the MP3 files with synthesized speech for robot status messages like “Turning left”, “Robot is recognizing objects”, etc.
  • lex_bot: This folder contains a JSON export of the Lex bot I’ve used. You can use this file or build the bot from scratch using the instructions below.

Create the IAM role and configure access

  1. Create a new IAM role. Let us call it Lambda_IoT_role.
  2. For services that will use this role, choose Lambda.
  3. Attach the following policies to this role: AWSLambdaBasicExecutionRole, AWSIoTDataAccess

IAM role for Lambda

Configure AWS IoT

  1. From the AWS Management Console, go to Services, and open AWS IoT Core.
  2. Go to Things, and click Register a thing.
  3. Choose Create a single thing.
  4. Give your thing (your robot) a name and click Next. We’re calling it Rekogrobot in this example.
  5. Click Create Certificate. This is used to securely authenticate your robot when it communicates with AWS IoT.
  6. Download the IoT device certificate, the public key, the private key, and the root CA certificate. It is important to download all these files now, as the private and public keys cannot be retrieved once you leave this page. The root CA certificate may open as text in a new browser tab; if this happens, simply copy the entire block of text and save it in a file called root_ca.pem. After downloading, click the Activate button.
  7. Next, go to Secure > Policies and click Create a Policy.
  8. Give the policy a name; I’ll call it RobotPolicy. In the Action box enter iot:*, in the Resource ARN enter *, and choose Allow as the Effect. Note that this allows the robot to perform all IoT actions unrestricted, for the purpose of this experiment; in the real world, you will want to restrict this policy further.
  9. Next, you need to attach the policy you just created to the thing certificate. To do this, go to Secure > Certificates, right-click on the robot’s certificate and choose Attach policy.
  10. Select the RobotPolicy policy and choose Attach.
  11. Go to Manage > Things, and choose Rekogrobot. Click Interact in the sidebar. Make a note of the HTTPS endpoint for updating the thing shadow, as well as the various MQTT topics. You’re going to need these in the next section.
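
If you prefer the SDK to the console, the same data endpoint can also be retrieved programmatically with boto3; a quick sketch:

[cc lang="python"]
# Optional: look up the AWS IoT data endpoint with the SDK instead of the console.
import boto3

iot = boto3.client("iot")
endpoint = iot.describe_endpoint()["endpointAddress"]
print(endpoint)  # something like abc123xyz.iot.eu-west-1.amazonaws.com
[/cc]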

Create the Lambda functions MoveRobot and SeeRobot

You will need to create two Lambda functions – MoveRobot and SeeRobot.

MoveRobot: Called by the Lex bot that you will create when it determines the intent of the user is to make the robot move. The Lex bot will have a slot named ‘Direction’ which will contain the intended direction of travel for the robot (Left, Right, Forward, Backward, Stop). The MoveRobot Lambda function will update the IoT device shadow for the robot so the shadow listener code on the Raspberry Pi can read the updates and perform actions.

  1. On the AWS Management Console, navigate to AWS Lambda. Click Create Function.
  2. Choose Author from scratch. Enter the name as MoveRobot. Runtime as Python 3.6. Choose the IAM role you created in the first section of this post. Click Create Function.
  3. This Lambda function does not require any triggers. It will be invoked by Lex.
  4. The MoveRobot and SeeRobot Lambda functions each consist of two files: lambda_function.py and shadow_updater.py. In the Designer view, either copy and paste the code from the gist provided in the next section below into two new files, or copy the contents of the /lambda_functions/MoveRobot/ folder into a zip file and upload the zip file directly into the Lambda console (choose Upload .ZIP file in the drop-down under Code Entry Type in the Function code section). Here is how the Function code should look when you’re done.
  5. Scroll further down and set the environment variables. AWS_IOT_MQTT_HOST should contain your thing shadow HTTPS endpoint that you made note of after configuring AWS IoT.
  6. Repeat steps 2 to 5 for the SeeRobot function. The code is available in /lambda_functions/SeeRobot/ folder or can be copied from the gist below.

Explanation of the code

For convenience, I have commented the code with details in the gist below. Feel free to post a comment if something isn’t clear.

MoveRobot
lambda_function.py
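
As a minimal sketch of what a handler along these lines looks like (not the exact gist), assuming the Lex (V1) event format and the AWS_IOT_MQTT_HOST environment variable described earlier; the command and direction shadow keys are illustrative placeholders:

[cc lang="python"]
# Sketch of a MoveRobot-style handler: read the Direction slot from the Lex event
# and write it to the thing shadow, then tell Lex the intent was fulfilled.
import json
import os
import boto3

# AWS_IOT_MQTT_HOST is the thing shadow HTTPS endpoint noted earlier.
iot_data = boto3.client(
    "iot-data", endpoint_url="https://" + os.environ["AWS_IOT_MQTT_HOST"]
)


def lambda_handler(event, context):
    direction = event["currentIntent"]["slots"]["Direction"]

    # Illustrative shadow document; the listener on the Pi looks for these keys.
    payload = json.dumps({"state": {"desired": {"command": "MOVE", "direction": direction}}})
    iot_data.update_thing_shadow(thingName="Rekogrobot", payload=payload)

    # Lex fulfillment response.
    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {
                "contentType": "PlainText",
                "content": "OK, moving {}.".format(direction),
            },
        }
    }
[/cc]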

shadow_updater.py

SeeRobot
lambda_function.py

shadow_updater.py

You’ve now created two Lambda functions MoveRobot and SeeRobot. Let’s see how we can tie this up together with Lex.

Create a Lex bot

    1. Open Amazon Lex in the AWS Management Console. Choose to create a Custom bot.
    2. Give your Lex bot a name. I’ve called mine Rekogrobot.
    3. Create two intents: Move and SeeObject.
    4. For the intent Move:
      • Create some sample utterances like the ones below:
      • Create a slot named Direction.
      • Create a slot type named RobotDirections. Populate the Slot type with the following values:
      • Configure the Fulfillment with a Lambda Function and choose the MoveRobot function. Choose Latest under Version or alias.
    5. For the intent SeeObject:
      • Create some suitable utterances.
      • Configure the Fulfillment with a Lambda Function and choose the SeeRobot function. Choose Latest under Version or alias.

While configuring your Lex bot, if you come across the error below, you may need to edit your Lambda function policy.

You can manually edit the function policy using the AWS Command-line interface (CLI) by using the following commands:

C:\Users\shijaza>aws lambda add-permission --region eu-west-1 --function-name MoveRobot --statement-id 1 --principal lex.amazonaws.com --action lambda:InvokeFunction --source-arn arn:aws:lex:eu-west-1:1xxxxxxxx:intent:Move:*
{
"Statement": "{\"Sid\":\"3\",\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"lex.amazonaws.com\"},\"Action\":\"lambda:InvokeFunction\",\"Resource\":\"arn:aws:lambda:eu-west-1:1xxxxxxxx:function:MoveRobot\",\"Condition\":{\"ArnLike\":{\"AWS:SourceArn\":\"arn:aws:lex:eu-west-1:1xxxxxxxx:intent:Move:*\"}}}"
}

C:\Users\shijaza>aws lambda add-permission --region eu-west-1 --function-name SeeRobot --statement-id 1 --principal lex.amazonaws.com --action lambda:InvokeFunction --source-arn arn:aws:lex:eu-west-1:1xxxxxxxx:intent:SeeObject:*
{
"Statement": "{\"Sid\":\"1\",\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"lex.amazonaws.com\"},\"Action\":\"lambda:InvokeFunction\",\"Resource\":\"arn:aws:lambda:eu-west-1:1xxxxxxxx:function:SeeRobot\",\"Condition\":{\"ArnLike\":{\"AWS:SourceArn\":\"arn:aws:lex:eu-west-1:1xxxxxxxx:intent:SeeObject:*\"}}}"
}

Preparing your robot on the Raspberry Pi

These steps assume you’ve successfully built the robot as per the instructions in Part 1.

You will need Python 3 to run the supplied code. You should already have it installed on your Raspberry Pi if you completed Part 1. Just a reminder – you can install it by typing sudo apt-get install python3

You also need to install the following packages:

The following packages are also required, but you’d already have these installed if you completed the steps in part 1 of this series.

To install a python3 package, make sure to use pip3. Example: pip3 install paho-mqtt

Copy the credentials to the ‘thing’ (the Raspberry Pi)

Create a directory where you want to run the Python script on the Raspberry Pi. Copy into it the thing certificate, the private key file, the public key file, and the root CA certificate .pem file that you obtained when you created your thing in the AWS IoT console.

Setting up the environment variables

Type the following commands to set the OS environment variables on the Raspberry Pi.

You are specifying the path/file names of the thing certificate, the private key file, the public key file, the root CA pem file, the thing shadow https endpoint (MQTT host), the default port numbers, the thing name and client ID that you configured in the AWS IoT console.

export AWS_IOTCERTIFICATE_FILENAME="e94de4a864-certificate.pem.crt"
export AWS_PRIVATE_KEY_FILENAME="e94de4a864-private.pem.key"
export AWS_PUBLIC_KEY_FILENAME="e94de4a864-public.pem.key"
export AWS_IOT_ROOT_CA_FILENAME="root_ca.pem"
export AWS_IOT_MQTT_HOST="abc123xyz.iot.eu-west-1.amazonaws.com"
export AWS_IOT_MQTT_PORT=8883
export AWS_IOT_MQTT_PORT_UPDATE=8443
export AWS_IOT_MQTT_CLIENT_ID="Rekogrobot"
export AWS_IOT_THING_NAME="Rekogrobot"

Copying the script to the Raspberry Pi

Copy shadow_listener.py from the /raspberry_pi folder to the same folder on your Raspberry Pi and execute it by typing python3 shadow_listener.py
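
For a picture of what shadow_listener.py does, here is a minimal sketch using paho-mqtt and the environment variables above. It is not the full script from the package: the command and direction keys mirror the illustrative shadow document from the Lambda sketches, and the motor and camera calls are left as placeholders.

[cc lang="python"]
# Minimal shadow-listener sketch (not the full shadow_listener.py from the package).
import json
import os
import ssl
import paho.mqtt.client as mqtt

THING = os.environ["AWS_IOT_THING_NAME"]
DELTA_TOPIC = "$aws/things/{}/shadow/update/delta".format(THING)
UPDATE_TOPIC = "$aws/things/{}/shadow/update".format(THING)


def on_connect(client, userdata, flags, rc):
    client.subscribe(DELTA_TOPIC)


def on_message(client, userdata, msg):
    # The delta topic delivers only the desired keys that differ from the reported state.
    desired = json.loads(msg.payload)["state"]
    command = desired.get("command")
    if command == "MOVE":
        print("Moving:", desired.get("direction"))  # call your motor-control code here
    elif command == "SEE":
        print("Recognizing objects")                # call your camera/Rekognition code here
    # Report the state back so the delta is cleared.
    client.publish(UPDATE_TOPIC, json.dumps({"state": {"reported": desired}}))


client = mqtt.Client(client_id=os.environ["AWS_IOT_MQTT_CLIENT_ID"])
client.tls_set(
    ca_certs=os.environ["AWS_IOT_ROOT_CA_FILENAME"],
    certfile=os.environ["AWS_IOTCERTIFICATE_FILENAME"],
    keyfile=os.environ["AWS_PRIVATE_KEY_FILENAME"],
    tls_version=ssl.PROTOCOL_TLSv1_2,
)
client.on_connect = on_connect
client.on_message = on_message
client.connect(os.environ["AWS_IOT_MQTT_HOST"], int(os.environ["AWS_IOT_MQTT_PORT"]))
client.loop_forever()
[/cc]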

I hope you enjoyed this post. Please share your feedback in the comments section below!

Building a robot with computer vision and speech

In this two-part series, I show you how I built a robot using a Raspberry Pi, the CamJam EduKit #3, and Amazon Web Services.

First, let’s take a look at a demo that describes the robot’s capabilities:

https://youtu.be/4GH_0LEwjPo

The robot works in two modes:

  1. ‘Remote Control’ mode: The robot’s navigation can be controlled remotely through a web-based console, and it can be made to “speak” out what it “sees”.
  2. Chatbot control: I am currently working on this. Control the robot using your own voice and text based chat. This will be featured in part 2 of this post when I’m ready.

In this post, I explain the physical build of the robot as well as preparation of the operating system and software.

Components required

Before we begin, let’s take a look at what components went into my robot:

Raspberry Pi 3 Model B

Available here and on Amazon.com

The lower-priced Raspberry Pi Zero W could also work but it will be complicated to connect the speakers as the Pi Zero does not have an audio jack.

CamJam EduKit #3 – Robotics

I found this kit extremely useful. It has (nearly) all the components you need to get started building a robot instead of having to figure out everything by yourself. The EduKit #3 includes:

  • 2 x DC motors 
  • 1 x DC motor controller board
  • 2 x wheels
  • 1 x ball castor (‘front wheel’ of the robot)
  • 1 x small breadboard
  • 1 x battery box for 4 AA batteries to drive the motor
  • 1 x ultrasonic distance sensor
  • A line follower sensor (not required for this project)
  • Resistors and jumper cables

You can buy one at thePiHut

Power bank – small size

You probably already have one of these lying around. This is used to power your Raspberry Pi. I used an old Nokia power bank.

Batteries

You will also need four (4) AA batteries.

Powered speaker with 3.5mm jack, and an audio cable

Your robot will speak through this. A rechargeable/powered speaker will be needed. Try to get a light one so that you do not add stress to the motors. I used a Nokia Bluetooth speaker that I had lying around.

Raspberry Pi Camera Module

Your robot will see using this. 

You can buy one here or on Amazon.com

microUSB cable

To connect your power bank to your Raspberry Pi. I recommend a very short cable with right-angle connectors to save space inside the chassis. Speaking of which…

A chassis – a plastic or cardboard box, OR access to a 3D Printer.

You’ll need a chassis for your robot. Print one or use the cardboard box that came with the CamJam EduKit (this is what I did).  Use your imagination!

Amazon Web Services (AWS) account.

Create one at aws.amazon.com if you don’t already have one. We’ll be using Amazon S3, Amazon Polly, and Amazon Rekognition for this project.

Assembling your robot

I’ve put together a short video on the components that went into the robot. This is to supplement the already detailed documentation available on the CamJam website.

In addition to the CamJam components, you’ll need to connect a Raspberry Pi camera module, mount the camera on a ZeroView camera mount minus the Pi Zero (or other mounting arrangement), and connect a speaker to the 3.5mm headphone jack.

https://youtu.be/oyg_JdpbKf4

Configuring the Raspberry Pi

Now that you have finished putting together your robot, it’s time to prepare your Raspberry Pi. Below are some of the things you’ll need to do. You’ll find steps for these in the documentation for Raspberry Pi – I’ve added links.

  1. Configure the Raspberry Pi with a static IP on your wireless network. [documentation] You’ll need internet access to download updates and some of the packages.
  2. Enable SSH (recommended for convenience) [documentation]
  3. Install the latest updates. [documentation]
  4. Install Python 3 [documentation]. The code provided is not compatible with Python 2.x.
  5. You may want to change the default hostname and change the default password.
  6. Download and install webbot on your Pi from GitHub. We’ll be modifying code from this project.
  7. Install pygame on your Raspberry Pi [documentation]
  8. Enable the camera interface on your Raspberry Pi. (Tip: type sudo raspi-config)
  9. While you’re there, change the audio configuration of the Pi so that it plays through the speakers you’ve connected to the 3.5mm socket, instead of the HDMI port. [documentation]
  10. You can also set the time and timezone in raspi-config. (Tip: look under Localization/Localisation options)
  11. Sign up for an Amazon Web Services (AWS) account if you don’t already have one.
  12. Install the AWS Command Line Interface (CLI) on the Raspberry Pi. [documentation]
  13. Install boto3. That’s the Amazon Web Services SDK for Python.
  14. On the machine that you’re using to SSH into the Raspberry Pi, install an SCP client (like WinSCP) if you don’t already have it installed. This is not a required step, but it will make it much easier for you to edit code on your PC and have it synchronized with your Pi during testing.

Configuring Amazon Web Services (AWS)

Here is a diagram that describes the process at a high level.

Creating and configuring an S3 bucket

  1. Create an S3 bucket. In my example, I am calling it rekorobot.
  2. Since this is meant to be an experiment, you could configure the bucket permissions to allow public read. Important: Note that pictures taken by the robot will be publicly accessible if you use this method, so do not store sensitive content. It is recommended to further secure the bucket for more specific read access but this requires some changes beyond the scope of this post. Public write is not required for this project and should NOT be enabled. 
  3. Optional: Create a lifecycle rule to delete all objects in the bucket after one day. This ensures that pictures uploaded by the robot are deleted every day and you don’t incur unnecessary charges for images that are no longer needed.
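
If you’d rather script these bucket steps than click through the S3 console, here is a boto3 sketch; bucket names must be globally unique, so rekorobot is just the example name from this post, and eu-west-1 is an example region:

[cc lang="python"]
# Sketch: create the bucket and a one-day expiration lifecycle rule with boto3.
import boto3

s3 = boto3.client("s3", region_name="eu-west-1")
s3.create_bucket(
    Bucket="rekorobot",
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)
s3.put_bucket_lifecycle_configuration(
    Bucket="rekorobot",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-robot-images",
                "Filter": {"Prefix": ""},
                "Status": "Enabled",
                "Expiration": {"Days": 1},
            }
        ]
    },
)
[/cc]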

Configuring access

1. Create an IAM user (say robotuser) with the following AWS Managed Policies attached: AmazonRekognitionFullAccess, AmazonPollyFullAccess. Note the Access Key ID and Secret Access Key. You’re going to need these later.

2. Create a Managed Policy that allows the user access to put and get objects on the rekorobot bucket you just created. [documentation] A sketch of such a policy, using boto3, follows below.

3. On the Raspberry Pi, configure the AWS CLI using the aws configure command. Enter the Access Key ID and the Secret Access Key for the robotuser IAM user.
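
Here is a sketch of what that managed policy could look like, created and attached with boto3; the policy name is an example, and the same thing can be done entirely in the IAM console:

[cc lang="python"]
# Sketch: a customer-managed policy granting robotuser put/get access to the bucket.
import json
import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:GetObject"],
            "Resource": "arn:aws:s3:::rekorobot/*",
        }
    ],
}

response = iam.create_policy(
    PolicyName="RekorobotS3Access",          # example name
    PolicyDocument=json.dumps(policy_document),
)
iam.attach_user_policy(UserName="robotuser", PolicyArn=response["Policy"]["Arn"])
[/cc]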

Configuring Amazon Rekognition

There isn’t much to configure here. Once you’ve allowed access to Amazon Rekognition for robotuser, your code can call the Amazon Rekognition APIs right away using the boto3 SDK!
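
For a quick sanity check, you can call detect_labels on an image that’s already in the bucket; the object key below is just an example:

[cc lang="python"]
# Quick test: ask Rekognition what is in an image already uploaded to the bucket.
import boto3

rekognition = boto3.client("rekognition")
response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "rekorobot", "Name": "robot-picture.jpg"}},
    MaxLabels=5,
    MinConfidence=70,
)
for label in response["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))
[/cc]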

Configuring Amazon Polly

There isn’t much to configure here. Once you’ve allowed access to Amazon Polly for robotuser, your code can call the Amazon Polly APIs right away!

However, you’ll need Polly to generate a speech file for you to copy to the robot so it can say something like “I am currently not able to identify any objects” in case nothing was “seen” or recognized. To do this, simply type the text in the Amazon Polly console and choose Download MP3. You can customize the voice; I chose Salli.

Save this file to the /webbot folder on your Raspberry Pi. Name the file notfound_Salli.mp3

Follow the same steps to create another file named robotready_Salli.mp3 that can play a message like “Hello. Robot is now ready” when the robot starts up. Copy this file to the /webbot folder.
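
If you’d rather generate these MP3s with code than download them from the console, a boto3 sketch along these lines should work:

[cc lang="python"]
# Alternative to the console download: generate the status MP3s with boto3 and Polly.
import boto3

polly = boto3.client("polly")

messages = {
    "notfound_Salli.mp3": "I am currently not able to identify any objects",
    "robotready_Salli.mp3": "Hello. Robot is now ready",
}

for filename, text in messages.items():
    response = polly.synthesize_speech(Text=text, VoiceId="Salli", OutputFormat="mp3")
    with open(filename, "wb") as f:
        f.write(response["AudioStream"].read())
[/cc]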

Writing and modifying code

You will need to modify the index.html file in the /webbot/public folder to make some cosmetic changes and add the extra controls we need. Take a look at the modified index.html below:

Here’s how the modified interface looks when accessed from my phone:

Modified Webbot Interface

Next, you’ll need to modify the webbot.py code using your favorite text editor so that it does the extra bits:

1. Take a picture programmatically using the Pi Camera and the raspistill utility when a button is clicked in the web interface provided by webbot.

2. Upload the image from the camera to S3.

3. Have the image analyzed by Amazon Rekognition and obtain the response.

4. Send the response text to Amazon Polly and obtain the speech response.

5. Play out the speech through the speaker using pygame. 

6. Add some basic error handling.

I like to use WinSCP and Notepad++ on my PC for editing code on the Raspberry Pi. Below is the modified code. I have provided commented code so things are clear.
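
To make the flow concrete, here is a condensed sketch of the see-and-speak routine described in the numbered steps above; the bucket, object key, file names, and spoken phrasing are illustrative, and error handling is kept minimal:

[cc lang="python"]
# Condensed sketch of the see-and-speak routine; names and phrasing are illustrative.
import subprocess
import boto3
import pygame

BUCKET = "rekorobot"
IMAGE = "robot-picture.jpg"

s3 = boto3.client("s3")
rekognition = boto3.client("rekognition")
polly = boto3.client("polly")
pygame.mixer.init()


def play(mp3_file):
    pygame.mixer.music.load(mp3_file)
    pygame.mixer.music.play()
    while pygame.mixer.music.get_busy():
        pygame.time.wait(100)


def see_and_speak():
    # 1. Take a picture with the Pi camera.
    subprocess.run(["raspistill", "-o", IMAGE, "-w", "1024", "-h", "768"], check=True)

    # 2. Upload the image to S3.
    s3.upload_file(IMAGE, BUCKET, IMAGE)

    # 3. Ask Rekognition what is in the image.
    labels = rekognition.detect_labels(
        Image={"S3Object": {"Bucket": BUCKET, "Name": IMAGE}},
        MaxLabels=5, MinConfidence=70,
    )["Labels"]

    if not labels:
        # 6. Basic error handling: fall back to the pre-generated message.
        play("notfound_Salli.mp3")
        return

    # 4. Turn the labels into a sentence and synthesize it with Polly.
    sentence = "I can see " + ", ".join(label["Name"] for label in labels)
    speech = polly.synthesize_speech(Text=sentence, VoiceId="Salli", OutputFormat="mp3")
    with open("labels.mp3", "wb") as f:
        f.write(speech["AudioStream"].read())

    # 5. Play the speech through the speaker.
    play("labels.mp3")
[/cc]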

See Part 2 – Remotely control this robot using a chatbot, serverless compute and IoT!

Bonus content

In case you’re wondering what “the view from the robot” was like when I recorded the first video above, here they are:

I hope you find this helpful. If you do, be sure to post a comment below. Have fun!

Don’t forget to read Part 2 – Remotely control this robot using a chatbot, serverless compute and IoT