Author Topic: Google Vision API using Raspberry Pi and Node  (Read 177 times)

Ladvien

  • Alabtu-ian Refugee
  • Member
  • Posts: 57
Google Vision API using Raspberry Pi and Node
« on: April 20, 2018, 02:38:12 PM »

This is a jumpstart guide to connecting a Raspberry Pi Zero W to the Google Vision API.

1. Get an Account
Sadly, the Google Vision API is not a completely free service.  At the time of writing, an API account provides 1,000 free Google Vision API calls a month.  After that, it's $1.00 per 1,000 calls.

I know, I know, not too bad.  But this isn't a commercial project.  I want to use it for a puttering little house bot.  If my wife gets a bill for $40 because I decided to stream images to the API, well, it'll be a dead bot. Anyway, I thought I'd still explore the service for poo-and-giggles.

To get an account, visit

* Google Console

and sign in with an existing Google account, or create one.

2. Enter Billing Information
Now, here's the scary part: you must enter your billing information before getting going.  Remember, you will be charged if you go over 1,000 calls a month.



Again, if you exceed your 1,000 free calls you will be charged. (What? I said that already? Oh.)

3. Enable Cloud Vision API
After setting up billing information, we still need to enable the Cloud Vision API.  This is essentially a security feature: all Google APIs are disabled by default, so if someone accidentally gets access to your account they can't unleash hell everywhere.




Now search for [code single]Vision[/code] and click the Cloud Vision API result.  There should be a glaring [code single]Enable[/code] button.  Press it.




The last thing we need to do is get the API key.  This needs to be included with each API call for authentication; in the code below it is passed as a query parameter on the request URL.

Do not let anyone get your API key. And do not hardcode it in your code.  Trust me, this will bite you.  If this accidentally gets pushed onto the web, a web crawler will find it quickly and you will be paying bajillions of dollars.

Let this article scare you a bit.

* Dev Puts AWS Keys on Github

Let's go get your API key.  Find the [code single]Credentials[/code] section.



You probably won't see any credentials listed, as you haven't created any yet.

Let's create a new API Key.


I'd name the key something meaningful and restrict it to the Cloud Vision API only.



Go ahead and copy your API key, as we will need it in the next step.

4. Raspberry Pi Side Setup
The articles listed at the top of this one will help you set up the Raspberry Pi for this step.  If you are doing things differently, most of this should still work for you.  However, the part about environment variables will be different on other Linux flavors.

Start by SSH'ing into your Pi.

And update all packages.
Code: [Select]
sudo pacman -Syu
We're going to create an environment variable for the Google Cloud Vision API key.  This is to avoid hardcoding your API key into the code further down.  Hardcoding it would work, but I highly recommend you stick with me and set up an environment variable to handle the API key.

Switch to the root user by typing
Code: [Select]
su
Enter your password.

The next thing we do is add your Google Vision API key as an environment variable to the [code single]/etc/profile[/code] file; this causes it to be initialized every time you log in.

Type the following, replacing [code single]YOUR_API_KEY[/code] with your actual API key.
Code: [Select]
echo 'export GOOGLE_CLOUD_VISION_API_KEY=YOUR_API_KEY' >> /etc/profile
Now reboot the Pi so the change takes effect.

Code: [Select]
sudo reboot
Log back in.  Let's check to make sure it's loading the API key.
Code: [Select]
echo $GOOGLE_CLOUD_VISION_API_KEY
If your API key is echoed back, you should be good to go.
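Since the Node program below reads the key through [code single]process.env[/code], it's worth confirming that Node sees it as well ([code single]/etc/profile[/code] is only sourced for login shells, so this can differ depending on how you connect).  A quick, hedged check -- [code single]check_env.js[/code] is just a throwaway name -- assuming Node is already installed from the earlier setup articles:
Code: [Select]
// check_env.js -- throwaway helper: does Node see the API key?
if (process.env.GOOGLE_CLOUD_VISION_API_KEY) {
    console.log('Google Vision API key found.');
} else {
    console.log('No GOOGLE_CLOUD_VISION_API_KEY in this environment.');
}
Run it with [code single]node check_env.js[/code]; if it reports the key was found, the app below will see it too.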

5. Project Setup

Let's create a project directory.

Code: [Select]
mkdir google-vis
cd google-vis
Now let's initialize a new Node project.
Code: [Select]
npm init
Feel free to customize the package details if you like.  If you're lazy like me, hit Enter until you are back at the command prompt.

Let's add the one Node library we need: axios, which enables async web requests.

Code: [Select]
npm install axios
Also, let's create a resources directory and download our lovely test image.  Ah, Miss Hepburn!

Make sure you are in the [code single]google-vis/resources[/code] project directory when downloading the image.
Code: [Select]
mkdir resources
cd resources
wget https://ladvien.com/images/hepburn.png

6. NodeJS Code

Move back up into the [code single]google-vis[/code] directory and create a file called [code single]app.js[/code].

Code: [Select]
nano app.js
Then paste in the code below, save the file with CTRL+O, and exit with CTRL+X.

Code: [Select]
// https://console.cloud.google.com/
const axios = require('axios');
const fs = require('fs');

// Grab the API key from the environment variable we set in /etc/profile.
const API_KEY = process.env.GOOGLE_CLOUD_VISION_API_KEY;

if (!API_KEY) {
    console.log('No API key provided');
    process.exit(1);
}

// Read an image file and convert the binary data to a base64 string,
// which is what the Vision API expects in the request body.
function base64_encode(file) {
    const bitmap = fs.readFileSync(file);
    return Buffer.from(bitmap).toString('base64');
}

const base64str = base64_encode('./resources/hepburn.png');

const apiCall = `https://vision.googleapis.com/v1/images:annotate?key=${API_KEY}`;

// Ask for labels, faces, and image properties in a single request.
const reqObj = {
    requests: [
        {
            "image": {
                "content": base64str
            },
            "features": [
                {
                    "type": "LABEL_DETECTION",
                    "maxResults": 5
                },
                {
                    "type": "FACE_DETECTION",
                    "maxResults": 5
                },
                {
                    "type": "IMAGE_PROPERTIES",
                    "maxResults": 5
                }
            ]
        }
    ]
};

axios.post(apiCall, reqObj).then((response) => {
    console.log(response);
    console.log(JSON.stringify(response.data.responses, undefined, 4));
}).catch((e) => {
    console.log(e.response);
});

This code grabs the API key environment variable and creates a program constant from it.

Code: [Select]
const API_KEY = process.env.GOOGLE_CLOUD_VISION_API_KEY;
This is how we avoid hardcoding the API key.

7. Run
Let's run the program.

Code: [Select]
node app.js
If all went well, you should get output similar to the below.

Code: [Select]
data: { responses: [ [Object] ] } }
[
    {
        "labelAnnotations": [
            {
                "mid": "/m/03q69",
                "description": "hair",
                "score": 0.9775374,
                "topicality": 0.9775374
            },
            {
                "mid": "/m/027n3_",
                "description": "eyebrow",
                "score": 0.90340185,
                "topicality": 0.90340185
            },
            {
                "mid": "/m/01ntw3",
                "description": "human hair color",
                "score": 0.8986981,
                "topicality": 0.8986981
            },
            {
                "mid": "/m/0ds4x",
                "description": "hairstyle",
                "score": 0.8985265,
                "topicality": 0.8985265
            },
            {
                "mid": "/m/01f43",
                "description": "beauty",
                "score": 0.87356544,
                "topicality": 0.87356544
            }
        ],
  ....
]
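If you'd rather work with those results in code than read raw JSON, here is a minimal sketch of pulling just the labels out of the same response -- it assumes the [code single]apiCall[/code] and [code single]reqObj[/code] from [code single]app.js[/code] above and would replace the existing [code single].then[/code] handler:
Code: [Select]
// Sketch: extract only the label descriptions and scores from the
// response logged above (response.data.responses is the same object).
axios.post(apiCall, reqObj).then((response) => {
    const labels = response.data.responses[0].labelAnnotations || [];
    labels.forEach((label) => {
        console.log(`${label.description}: ${label.score}`);
    });
}).catch((e) => {
    console.log(e.response);
});
With the Hepburn image, that should print lines like [code single]hair: 0.9775374[/code].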

8. And so much more...
This article is short--a jump start.  However, there is a lot of potential here.  For example, sending your own images using the Raspberry Pi Camera with one of these Node modules (a rough sketch follows the list):

* raspicam
* pi-camera
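I haven't shown either module here, so as a rough sketch only: you can also shell out to the stock [code single]raspistill[/code] tool (assuming it is installed and the camera is enabled on your Pi) and feed the capture into the same base64-and-POST pattern from [code single]app.js[/code]:
Code: [Select]
// Rough sketch, not shown working here: capture a photo with the raspistill
// CLI (assumes raspistill is installed and the Pi camera is enabled), then
// reuse the base64 + axios request pattern from app.js.
const { execSync } = require('child_process');
const fs = require('fs');

const imagePath = './resources/capture.jpg';

// -w/-h keep the upload small; -o writes the capture to disk.
execSync(`raspistill -w 640 -h 480 -o ${imagePath}`);

// Same idea as base64_encode() in app.js.
const base64str = fs.readFileSync(imagePath).toString('base64');
// ...drop base64str into reqObj and POST it with axios exactly as before.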

Please feel free to ask any questions regarding how to use the output.

There are other feature detection request types as well.

* Google Vision API -- Other Features
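For instance, [code single]TEXT_DETECTION[/code] (OCR) is one of the other feature types the annotate endpoint accepts -- see the link above for the full list.  A sketch of swapping it into the request from [code single]app.js[/code]:
Code: [Select]
// Sketch: a variation on reqObj from app.js that asks for OCR instead of
// labels/faces.  POST it to the same apiCall URL with axios as before.
const textReqObj = {
    requests: [
        {
            "image": { "content": base64str },
            "features": [
                { "type": "TEXT_DETECTION", "maxResults": 5 }
            ]
        }
    ]
};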

However, I'm going to end the article here and move on to rolling my own vision detection systems.  As soon as I figure out stochastic gradient descent.

 
