Wern Ancheta

Adventures in Web Development.

Working With Youtube Data API in PHP


Decades ago I got this project where I needed to work with the Youtube API to get the details of videos uploaded by a specific channel, and then create something like a mini-Youtube website out of it. Just kidding about the decades part, it was probably 4-6 months ago. Anyway, it's only now that I've got the time to actually write about it. So here it goes.

Getting API Credentials

First you need to get API credentials from your Google Console. A single API credential covers all of the APIs that Google offers, so you might already have one. If you do, then all you have to do is enable the API on your Google Console page. Currently you would see something like this when you go to APIs & Auth and then click on APIs in your Google Console:

[Screenshot: list of Google APIs in the Google Console]

What we need is the Youtube Data API v3. Click that and enable it. If you do not have an API credential yet, click on 'Credentials' under APIs & Auth, then click 'Create new Key' under the Public API Access section. Choose Server Key as the key type since we're working primarily on the server. Don't take my word for it though; based on my experience this sometimes doesn't work and you actually need to select Browser Key. I just hope Google has fixed this already. Server keys are only supposed to be used on the server and browser keys on the client side. Clicking on either browser key or server key will generate an API key for you. This is the key that you will use whenever you need to talk to the Youtube API.

Dependencies

As we are primarily going to be requesting data from another server, we will need curl. If you don’t have it yet, install it on your system. Here’s how you install it on Ubuntu:

sudo apt-get update
sudo apt-get install curl
sudo apt-get install libcurl3 php5-curl

If you’re using another Operating System then feel free to ask Google.

Playing with the API

To make things easier we need a library that will do most of the heavy lifting for us: things like constructing the request, signing it, and actually making the request to the server. Because we're lazy folks, we don't want to do that every time we need to talk to an API. Thankfully an awesome guy going by the alias madcoda has already done that work for us. If you already have Composer installed, simply execute the following command inside your project directory:

composer require madcoda/php-youtube-api

This will install the library into your vendor directory, autoload it and add it to your composer.json file.

Once it's done you can now use the library by including the autoload.php file under the vendor directory and then using the Madcoda\Youtube namespace.

<?php
require 'vendor/autoload.php';

use Madcoda\Youtube;
?>

Next create a new instance of the Youtube class and pass in the API Key that you acquired earlier as the key item in an array.

<?php
$youtube = new Youtube(array('key' => 'YOUR_API_KEY'));
?>

Searching

With this library you can search for videos, playlists and channels by using the search method. This method takes your query as its argument. For example, to find 'Awesome':

<?php
$results = $youtube->search('Awesome');
?>

This will return something similar to the following if you use print_r on $results:

Array
(
[0] => stdClass Object
    (
        [kind] => youtube#searchResult
        [etag] => "tbWC5XrSXxe1WOAx6MK9z4hHSU8/xBkrpubrM2M6Xi88aNBfaVJV6gE"
        [id] => stdClass Object
            (
                [kind] => youtube#video
                [videoId] => qmTDT92VIRc
            )

        [snippet] => stdClass Object
            (
                [publishedAt] => 2015-01-23T23:03:31.000Z
                [channelId] => UCZpKcVBccIjO9n0RXx3ZGFg
                [title] => PEOPLE ARE AWESOME 2015 (UNBELIEVABLE)
                [description] => People are Awesome 2015 unbelievable talent and natural skills! Subscribe to NcCrullex for more people are awesome videos. Chris Samba Art Channel: ...
                [thumbnails] => stdClass Object
                    (
                        [default] => stdClass Object
                            (
                                [url] => https://i.ytimg.com/vi/qmTDT92VIRc/default.jpg
                            )

                        [medium] => stdClass Object
                            (
                                [url] => https://i.ytimg.com/vi/qmTDT92VIRc/mqdefault.jpg
                            )

                        [high] => stdClass Object
                            (
                                [url] => https://i.ytimg.com/vi/qmTDT92VIRc/hqdefault.jpg
                            )

                    )

                [channelTitle] => NcCrulleX
                [liveBroadcastContent] => none
            )

    )

As you can see, most of the data that you might want is stored in the snippet item: things like the title, the description and the URLs of the thumbnails.

You might ask: how would you know if an item is a video, playlist or channel? You might have already noticed based on the results above. It's located under id –> kind. It would have a kind of youtube#video if it's a video, youtube#channel if it's a channel, and youtube#playlist if it's a playlist. Don't believe me? Try using the API to search for 'the new boston' and you'll see.
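
To illustrate, here's a minimal sketch of how you might branch on that kind field when looping over mixed search results (based on the result structure shown above):

<?php
$results = $youtube->search('the new boston');

foreach($results as $item){
    if($item->id->kind == 'youtube#video'){
        echo 'Video: ' . $item->snippet->title . "\n";
    }else if($item->id->kind == 'youtube#playlist'){
        echo 'Playlist: ' . $item->snippet->title . "\n";
    }else if($item->id->kind == 'youtube#channel'){
        echo 'Channel: ' . $item->snippet->title . "\n";
    }
}
?>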

If you only want to search for videos, you can use the searchVideos method. Just like the search method, this takes your query as its argument:

<?php
$results = $youtube->searchVideos('Ninja');
?>

If you only want to get videos from a specific channel, you can do it in 2 calls. First, get the channel id by using the getChannelByName method and extract the id from the result that you get. Then use that id with searchChannelVideos to search for videos in the specific channel:

<?php
$channel = $youtube->getChannelByName('thenewboston');
$results = $youtube->searchChannelVideos('ruby', $channel->id);
?>

The code above would return the first page of results for 'ruby' videos in the 'thenewboston' channel.

If you only want to return playlists on a specific channel, you can do:

<?php
$channel = $youtube->getChannelByName('thenewboston');
$results = $youtube->getPlaylistsByChannelId($channel->id);
?>

If you want to get the items in a playlist, you can do it in 3 calls:

<?php
$channel = $youtube->getChannelByName('thenewboston');
$playlists = $youtube->getPlaylistsByChannelId($channel->id);
$playlist_items = $youtube->getPlaylistItemsByPlaylistId($playlists[0]->id);
?>

If you want more flexibility with your search, you can use the searchAdvanced method:

<?php
$results = $youtube->searchAdvanced(array(
    'q' => 'fruits',
    'part' => 'snippet',
    'order' => 'rating'
));
?>

Here’s a breakdown of the parameters we’ve just used:

  • q – your query
  • part – the part of the result which you want to get. Earlier in the sample result we saw that there are only 2 parts: id and snippet. This parameter allows you to specify which one you need. If you only need the video, playlist or channel id then supply id as the part. If you need the full details then use snippet. If you need both then you can use a comma-separated list: id, snippet.
  • order – the basis of the ordering. In the example we used rating, which orders the results from the highest rating to the lowest. I'm not really sure what the rating is based on, but the first thing that comes to mind is the number of likes on the video. You can also use viewCount if you want. This will order the results from the videos, playlists or channels with the highest number of views to the lowest.
  • type – the type of item. This can either be video, playlist, or channel.

There’s a whole bunch more which you can specify as a parameter. Be sure to check out the search reference.

Pagination

You can also paginate results. First you need to make an initial request so you can get the nextPageToken. Then check if the page token exists; if it does, add a pageToken item to the parameters that you supplied earlier and make another request. Since we supplied the nextPageToken, this will now navigate to the second page of the same result set. The number of rows per page is controlled by the maxResults parameter, so if the first page gave you rows 1 to 10, the second page will show you rows 11 to 20.

<?php
$params = array(
    'q' => 'Ruby',
    'type' => 'video',
    'part' => 'id, snippet',
    'maxResults' => 50 //the API allows at most 50 results per page
);

$search = $youtube->searchAdvanced($params, true);

//check for a page token
if(isset($search['info']['nextPageToken'])){
    $params['pageToken'] = $search['info']['nextPageToken'];
}

//make another request with the page token added
$search = $youtube->searchAdvanced($params, true);

//do something with the search results here
?>         

You can also use the paginateResults method to implement pagination. Just like the method above, we need to make an initial request to get the nextPageToken. We then store it in an array so we can navigate through the results easily. The paginateResults method takes the original search parameters as its first argument and the page token as its second. So all you have to do is supply the nextPageToken that you got from the previous result as the second argument of the paginateResults method to navigate to the next page.

Note that in the example below, the indexes for $page_tokens are just hard-coded. You will have to implement the generation of pagination links yourself and then use their index when navigating through the results. Also note that the results aren't cached: whenever you paginate through the results, a new request is made to the Youtube Data API. You will also need to implement caching if you don't want to run out of requests.

<?php
//your search parameters
$params = array(
    'q' => 'Python',
    'type' => 'video',
    'part' => 'id, snippet',
    'maxResults' => 50 //the API allows at most 50 results per page
);

//array for storing page tokens
$page_tokens = array();

//make initial request
$search = $youtube->paginateResults($params, null);

//store page token
$page_tokens[] = $search['info']['nextPageToken'];

//navigate to the next page
$search = $youtube->paginateResults($params, $page_tokens[0]);

//store page token
$page_tokens[] = $search['info']['nextPageToken'];

//navigate to the next page
$search = $youtube->paginateResults($params, $page_tokens[1]);

//store page token
$page_tokens[] = $search['info']['nextPageToken'];

//navigate to the previous page
$search = $youtube->paginateResults($params, $page_tokens[0]);

//do something with the search results here
?>
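
Here's a rough sketch of that caching idea, assuming a writable cache directory next to your script; the cached_paginate helper name is just for illustration:

<?php
function cached_paginate($youtube, $params, $page_token = null){
    $cache_file = 'cache/' . md5(serialize($params) . $page_token) . '.txt';

    //serve the cached copy if it's less than an hour old
    if(file_exists($cache_file) && time() - filemtime($cache_file) < 3600){
        return unserialize(file_get_contents($cache_file));
    }

    //otherwise hit the API and cache the result
    $results = $youtube->paginateResults($params, $page_token);
    file_put_contents($cache_file, serialize($results));
    return $results;
}
?>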

Conclusion

That's it! In this tutorial you've learned how to work with the Youtube Data API in PHP. You've learned how to get the info of a specific video, get general details about videos in a specific channel, get the videos in a specific playlist, and also search for videos, playlists and channels using a query. Don't forget to keep an eye on the API request limits though. The limit information can be found on the Youtube Data API page in your Google Console.


Creating a Chrome Extension


In this tutorial I'll be showing you how to create a very basic Chrome extension: one that allows us to schedule posts with the Ahead project that I created. Here's how it will work:

  1. User clicks on the extension on a page that they want to share at a future time.
  2. The extension makes a request to the server where Ahead is currently hosted.
  3. The server returns a response and it is then outputted by the extension.

Creating the Extension

Before anything else we need to create the manifest.json file. This is the most important file, since Chrome won't be able to recognize our extension without it.

{
  "manifest_version": 2,
  "name": "Ahead",
  "version": "1.0",
  "description": "Easily schedule posts",

  "browser_action": {
    "default_icon": "icon.png"
  },

  "background": {
    "scripts": ["background.js"]
  },

  "content_scripts": 
    [
        {
            "matches":["<all_urls>"],
            "js":["content.js"],
            "run_at": "document_end"
        }
    ],
  
  "permissions": ["<all_urls>", "storage"],
  "options_page": "options.html"
}

Breaking it down:

  • manifest_version – this is the version of the manifest file. The Chrome browser has been around for quite a while now, and so have the extensions written when it first came out. Currently the latest version that we can assign to a manifest file is 2.

  • name – the name you want to give to the extension.

  • version – the version of the extension.
  • description – a descriptive text you want to show your users. This is the text that will show right under the name of the extension when the user accesses the chrome://extensions page.
  • browser_action – used to specify the element which will trigger the extension. In this case we want an icon to be the trigger so we set the default_icon. The value would be the filename of the icon.
  • content_scripts – these are the scripts that run in the context of the current web page. The matches property is where you specify an array of URLs where the content scripts can run. In this case we just set a special value called "<all_urls>". This way the script can run on any webpage. Next is the js property, where we specify an array of items containing the paths to the content scripts. Last is the run_at property, where we specify when to run the content scripts. We just set it to document_end so we can make sure that the whole page is loaded before we execute our script.
  • background – used to specify the background scripts. Content scripts only have access to the elements in the current page but not the Chrome API methods, so we need a background script in order to access those methods. This property simply takes a single property called scripts, where you specify an array of the background scripts you wish to use. In this case we're just going to use a single background.js file.
  • permissions – this is where we can specify an array containing the list of items that the extension needs to use or have access to. In this case we're just going to use "<all_urls>" and storage. We use storage to have access to the methods used for saving custom settings for the extension. In our case the setting would be the API key required by Ahead.
  • options_page – used for specifying which HTML file will be used for the options page.

Next let’s proceed with the options page:

<!DOCTYPE html>
<html>
<head><title>Ahead</title></head>
<body>

    API Key:
    <input type="text" id="api_key">

    <button id="save">Save</button>

    <script src="options.js"></script>
</body>
</html>

You can use CSS just like you would in a normal HTML page if you want, but for this tutorial we won't. The options page is pretty minimal. All we need is the actual field, a button to save the settings, and a link to the options page JavaScript file.

Here’s the options.js file:

function save_options(){
  var api_key = document.getElementById('api_key').value;

  chrome.storage.sync.set({
    'api_key': api_key
  },
  function(){
    alert('API Key Saved!');
  });
}


function restore_options(){

  chrome.storage.sync.get({
    'api_key': ''
  },
  function(items){
    document.getElementById('api_key').value = items.api_key;
  });
}
document.addEventListener('DOMContentLoaded', restore_options);
document.getElementById('save').addEventListener('click',
    save_options);

In the above file we declared 2 methods: save_options and restore_options. save_options is used for saving the settings to Chrome storage, and restore_options is for retrieving the settings from storage and populating the value of each of the fields.

In the options.js file we have access to the Chrome storage API. The main methods that we're using are sync.set and sync.get. We use sync.set to save the settings in Chrome storage and then output an alert box saying the settings were saved when it's successful. sync.get, on the other hand, is used for retrieving the existing setting from Chrome storage; we then use the retrieved value to populate the text field. The save_options method is called when the save button is clicked, and the restore_options method is called when the DOM of the options page has been fully loaded.

Next is the background.js file. We primarily use this file to listen for the click event on the browser_action, which is basically the icon of the extension located in the upper right corner of Chrome:

chrome.browserAction.onClicked.addListener(function(tab){

  chrome.tabs.query({active: true, currentWindow: true}, function(tabs) {
    var activeTab = tabs[0];
    chrome.tabs.sendMessage(activeTab.id, {"message": "clicked_browser_action"});
  });
});

You don’t need to worry about the code above too much. All it does is listen for the click event on the icon of the extension. It then uses the tabs.sendMessage method to send a message to the current tab that hey the icon extension has been clicked. This then brings us to the content.js file which basically just waits for this message to be sent. Once it receives the message we then retrieve the api key using the sync.get method. Once we retrieved the api key we make a POST request to the Ahead URL which is responsible for accepting POST requests for posts to be published. The content would be the title of the current page and then its URL. We then construct a new form data and supply the queue, api_key and content as the fields. We set the queue to true because we want to schedule the post to be published later. If you set it to false then it will be published immediately. Next is the api_key. We simply supply what we got from chrome storage as the value. And last is the content. We then send this form data to the Ahead URL. Finally we listen for the onload event on the request. This event is fired up whenever the request is successful. All we have to do is parse the response since its a JSON string. We then alert the value for the text property. Which is basically just a message saying that the post was scheduled and when it will be published. If we do get an error, the onerror event is fired and we simply tell the user to try again by using an alert.

chrome.runtime.onMessage.addListener(
  function(request, sender, sendResponse){

    //only act on the message sent from background.js
    if(request.message !== "clicked_browser_action"){
      return;
    }

    chrome.storage.sync.get({
        'api_key': ''
    },
    function(items){
        var api_key = items.api_key;

        var http_request = new XMLHttpRequest();
        http_request.open('POST', 'http://ec2-54-68-251-216.us-west-2.compute.amazonaws.com/api/post', true);
        var content = document.title + ' ' + window.location.href;
        var form_data = new FormData();
        form_data.append('queue', true);
        form_data.append('api_key', api_key);
        form_data.append('content', content);
        http_request.send(form_data);

        http_request.onload = function(){
            if(http_request.status >= 200 && http_request.status < 400){
              var response_data = JSON.parse(http_request.responseText);
              alert(response_data.text);
            }
        };

        http_request.onerror = function() {
            alert('Something went wrong while trying to post. Please try again');
        };
    });

  }
);

Installing the Extension

Now we're ready to actually install the extension. You can do that by enabling developer mode on the Chrome extensions page:

chrome://extensions/

This will show you 3 new buttons: load unpacked extension, pack extension and update extensions now. All we need is the first one. Click on it, then select the folder that contains the manifest.json file in its root directory. Chrome will then list it as one of the available extensions:

[Screenshot: the extension listed on the chrome://extensions page]

Once it's loaded, click on the 'options' link to access the options page. From there, add the API key which you can get from the Ahead website.

At this point, any new tab that you open, or any existing tab that you reload, will be usable with the extension. Just click on the extension icon and it will schedule a post using the title of the page and its URL as the content.

Conclusion

That's it! In this tutorial you've learned the basics of how to create a Chrome extension. You've learned how to listen for the click event on the extension icon, how to add an options page and how to get details from the current page.

Getting Started With Lumen


In this tutorial I’ll walk you through Lumen, a lightweight framework from the same guys that made Laravel. Lumen is basically a lighter version of Laravel.

Installation

You can install Lumen by using composer’s create-project command. Simply execute the following command on your preferred install directory:

composer create-project laravel/lumen --prefer-dist

Once the installation is done, you can navigate to the lumen directory and execute the following:

php artisan serve --port=7771

This will serve the project on port 7771 of your localhost:

http://localhost:7771/

If the installation completed successfully, you will be greeted by the default screen.

Using Third Party Libraries

You can use third party libraries with Lumen by adding the package that you want to install in the composer.json file. Here’s an example:

"require": {
    "laravel/lumen-framework": "5.0.*",
    "vlucas/phpdotenv": "~1.0",
    "elasticsearch/elasticsearch": "~1.0",
    "guzzlehttp/guzzle": "~5.0"
},

Note that the lumen-framework and phpdotenv are there by default, since those are needed in order for Lumen to work. In the above file we have added elasticsearch and guzzlehttp as our dependencies.
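
After editing composer.json, tell Composer to actually download the new packages:

composer update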

You can then make Lumen aware of these libraries by initializing them in the files where you want to use them:

<?php
$http_client = new \GuzzleHttp\Client();
$es_client = new \Elasticsearch\Client();
?>

Configuration

By default Lumen is pretty bare-bones, which means that we need to do some configuration if we want to use some of the features that we usually have in Laravel. In Lumen you can enable most of those functionalities by editing the bootstrap/app.php file.

Enabling Sessions

You can enable sessions by removing the comment on the middleware which says Illuminate\Session\Middleware\StartSession:

<?php
$app->middleware([
    //'Illuminate\Cookie\Middleware\EncryptCookies',
    //'Illuminate\Cookie\Middleware\AddQueuedCookiesToResponse',
    'Illuminate\Session\Middleware\StartSession',
    //'Illuminate\View\Middleware\ShareErrorsFromSession',
    //'Laravel\Lumen\Http\Middleware\VerifyCsrfToken',
]);
?>

Enabling Eloquent

If you need to use Eloquent in your app, you can enable it by removing the comment on the following lines:

<?php
$app->withFacades();
$app->withEloquent();
?>

Dot Env

Lumen uses a .env file to set the environment configuration for the project. This way you can have a different .env file on your local machine and on your server. You can then tell git to ignore this file so that it doesn't get pushed along to the server when you deploy your changes. Here's how the .env file looks by default:

APP_ENV=local
APP_DEBUG=false
APP_KEY=SomeRandomKey!!!

APP_LOCALE=en
APP_FALLBACK_LOCALE=en

DB_CONNECTION=mysql
DB_HOST=localhost
DB_DATABASE=homestead
DB_USERNAME=homestead
DB_PASSWORD=secret

CACHE_DRIVER=memcached
SESSION_DRIVER=memcached
QUEUE_DRIVER=database

As you can see from the file above, you can set the name of the environment by setting the value of APP_ENV. Right after that is the APP_DEBUG configuration, which is set to false by default. If you're developing, you should set this to true so you have an idea of what's wrong when testing your app. Next is APP_KEY, which is basically used as a salt for sessions. You can use a random string generator for this. APP_LOCALE and APP_FALLBACK_LOCALE are used for setting the language of your app. This is set to English by default.

Next is the database configuration. Anything which starts with DB_ is database configuration. By default it expects to connect to a MySQL database. DB_HOST is the host on which the database is running. DB_DATABASE is the name of the database you want to connect to. DB_USERNAME is the username of the user you want to use for logging in. DB_PASSWORD is the password of that user.

After the database configuration come the cache, session and queue driver configurations. The cache and session drivers use memcached by default, so you'll have to install memcached if you're using the caching and session functionalities. If memcached is not present on the system then it will just fall back to the default, which is the filesystem.

Note that before you can use the .env file, you need to uncomment the following line in your bootstrap/app.php file. This way Lumen will load the .env file at the root of your project:

Dotenv::load(__DIR__.'/../');

Directory Structure

Here’s what the default directory structure looks like in Lumen. The one’s with * are files:

app
bootstrap
database
public
resources
storage
tests
vendor
*artisan
*server.php
*composer.json

The app directory is the one you will usually work with. This is where the routes, controllers and middlewares are stored.

The bootstrap directory only contains one file by default, the app.php file. As you have seen earlier, it's where you can configure and add new functionality to Lumen.

The database directory is where the database migrations and seeders are stored. You use migrations to easily jump from one database version to another; it's like version control for your database. Seeders, on the other hand, are used to populate the database with dummy data so that you can easily test your app without having to enter the information through the app itself.

The public directory is where your public assets are stored. Things like CSS, JavaScript and images are stored in this directory.

The resources directory is where you store the views that you use for your app.

The storage directory is where logs, sessions and cache files are stored.

The tests directory is where you put your test files.

The vendor directory is where the dependencies of your app are stored. This is where Composer installs the packages that you specified in your composer.json file.

The artisan file is used for command-line tasks in your project. We used it earlier when we served the project. The artisan file can also be used to create migrations, seeders and perform other tasks that you usually do through the command line.

The server.php file is used for serving the files without the use of a web server like Apache.

Routes

Routes are stored in the app/Http/routes.php file. Here’s how you would declare a route in Lumen:

<?php
$app->get('/', function(){
    return 'Hello World!';
});
?>

If you want to use a controller method to handle the response for a specific route then you can do something like this:

<?php
$app->get('/', 'App\Http\Controllers\HomeController@index');
?>

Then you would need to create a HomeController controller and declare an index method in it. This will then be used to return a response.
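
Here's a minimal sketch of what that might look like (the 'Hello World!' response is just a placeholder):

<?php namespace App\Http\Controllers;

use Laravel\Lumen\Routing\Controller as BaseController;

class HomeController extends BaseController {

    public function index()
    {
        return 'Hello World!';
    }
}
?>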

Controllers

Controllers are stored in the app/Http/Controllers directory. Needless to say, the convention is one file per controller. Otherwise it would be really confusing. Here’s the basic structure of a controller:

<?php namespace App\Http\Controllers;

use Illuminate\Http\Request;
use Laravel\Lumen\Routing\Controller as BaseController;

class HomeController extends BaseController{

}
?>

Note that we need to use Illuminate\Http\Request to be able to access the request parameters for each request. We also need to use Laravel\Lumen\Routing\Controller, which allows us to extend the functionality of the base controller class.

Views

Lumen still comes with Blade templating. All you have to do is create your views under the resources/views directory and use .blade.php as the file extension. Though unlike Laravel, you return views this way:

<?php
public function index(){
    return view('index');
}
?>

In the example above we're returning the index view that is stored in the root of the resources/views directory. If we want to return some data, we can pass it by supplying the array or object that we want to pass:

<?php
$array = array(
    'name' => 'Ash Ketchum',
    'pokemon' => 'Pikachu'
);

return view('index', $array);
?>

It can then be rendered in the view like so:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>test</title>
</head>
<body>
    Hi my name is {{ $name }}, my Pokemon is {{ $pokemon }}
</body>
</html>

Database

When working with a database you first need to edit the database configuration values in your .env file.

Migrations

Once that’s done you can try if your app can connect to your database by creating a database migration. You can do that by executing the following command in the root directory of your project:

php artisan migrate:install

The command above creates the migrations table in your database. The migrations table is used by Lumen to keep track of which database migrations are currently applied to your database. If that worked without a problem and you see that a migrations table has been created in your database, then you're good to go.

Next you can create a new table by using the make:migration command. This takes the action that you wish to perform. In this case we want to create a new table, so we use --create and supply the name of the table as the value. The second argument is the name that will be assigned to the migration class.

php artisan make:migration --create=users create_users_table

The command above will create a file which looks like the following in the database/migrations directory:

<?php

use Illuminate\Database\Schema\Blueprint;
use Illuminate\Database\Migrations\Migration;

class CreateUsersTable extends Migration {

    /**
     * Run the migrations.
     *
     * @return void
     */
    public function up()
    {
        Schema::create('users', function(Blueprint $table)
        {
            $table->increments('id');
            $table->timestamps();
        });
    }

    /**
     * Reverse the migrations.
     *
     * @return void
     */
    public function down()
    {
        Schema::drop('users');
    }

}
?>

The only thing that we need to edit here are the method calls inside the up method:

<?php
Schema::create('users', function(Blueprint $table)
{
    $table->increments('id');
    $table->string('name');
    $table->integer('age');
});
?>

That is where we specify the fields that we need to add to the users table.

Once you’re happy with the file, save it and then run:

php artisan migrate

This will create the table in your database and add a new row to the migrations table.
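
If you made a mistake, you can also undo the last batch of migrations with the rollback command:

php artisan migrate:rollback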

Seeds

You can create a new database seeder file inside the database/seeds directory. Here’s the usual structure of a seeder:

<?php

use Illuminate\Database\Seeder;

class UserTableSeeder extends Seeder
{
    public function run()
    {

        //seeding code       

    }
}
?>

Inside the run method is the actual seeding code. You can use the usual Laravel-flavored database queries inside it:

<?php
DB::table('users')->insert(
    array(
        'name' => 'Ash Ketchum',
        'age' => 10
    )
);

DB::table('users')->insert(
    array(
        'name' => 'Brock',
        'age' => 15
    )
);

DB::table('users')->insert(
    array(
        'name' => 'Misty',
        'age' => 12
    )
);
?>

Once that’s done, save the file and open up the DatabaseSeeder.php file. This is where you specify which seeders you want to execute whenever you execute the php artisan db:seed command. In this case we want to add the UserTableSeeder:

$this->call('UserTableSeeder');

Before we execute the php artisan db:seed command, we first need to reload the autoloaded files by executing the composer dump-autoload command. We need to do this every time we add a new seeder so that Lumen will take care of loading it.
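
So the full sequence looks like this:

composer dump-autoload
php artisan db:seed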

Getting Data

From your routes file you can now try fetching the users that we’ve added:

<?php
$app->get('/db-testing', function(){

    $users = DB::table('users')->get();
    return $users;
});
?>

With Lumen you can use the query builder, basic queries and even Eloquent. So if you already know how to work with those then you’re good to go.
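
For example, here's a rough sketch of a similar route using Eloquent instead of the query builder, assuming you've enabled Eloquent in bootstrap/app.php as shown earlier; the model and route name here are just for illustration:

<?php
use Illuminate\Database\Eloquent\Model;

//normally this would live in its own file under app/
class User extends Model {
    protected $table = 'users';
}

$app->get('/db-testing-eloquent', function(){
    //fetch all users older than 10
    return User::where('age', '>', 10)->get();
});
?>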

Conclusion

That’s it! In this tutorial I’ve walked you through Lumen and how you can install, configure and work with the different functionalities that it can offer.

Implementing Audio Calls With PeerJS


These past few days I've been playing around with WebRTC. For the uninitiated, WebRTC basically means Web Real Time Communication. Things like chat, audio or video calling come to mind when you say real time, and that is really what WebRTC is: it gives real time superpowers to the web. In this tutorial I'll be showing you how to implement audio calls with PeerJS. PeerJS is a JavaScript library that allows us to easily implement peer to peer communications with WebRTC.

Things We Need

Before we start, go ahead and download the things we’ll need for this tutorial:

  • jQuery – I know, right! Who still uses jQuery these days? Raise your left foot. Kidding aside, yes, we still need jQuery. In this tutorial we'll only be using it to handle click events, so if you're confident with your Vanilla JavaScript-Fu then feel free to skip it.

  • PeerJS – in case you missed it earlier, we're gonna need PeerJS so that we can easily implement WebRTC.

  • RecordRTC.js – this library mainly provides recording functionalities (e.g. taking screenshots and webcam photos, recording audio and video) but it also doubles as a shim provider. We won't really use the recording functionalities in this tutorial, so we're only using it to be able to request the use of the microphone on the device.

Overview of the App

We're going to build an app that allows 2 users to call each other through the web via WebRTC. This app can use the PeerServer Cloud, or you can implement your own PeerJS server. As for outputting the audio coming from the microphone of each peer, we will use HTML5 Audio. So all we have to do is convert the audio stream to a format that HTML5 Audio can understand so that each of the users can listen to the audio coming from the other side.

Building the App

Now that we have a basic overview of how the app will work, it's time to actually build it.

First, link all the things that we’ll need:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>test</title>
    <script src="//cdn.peerjs.com/0.3/peer.min.js"></script>
    <script src="//cdnjs.cloudflare.com/ajax/libs/jquery/2.1.3/jquery.min.js"></script>
    <script src="//www.WebRTC-Experiment.com/RecordRTC.js"></script>
</head>

Yes, you can also put those script tags right before the closing body tag if performance is your thing.

Next is the HTML that the user will actually see:

<body>
    <button id="start-call">start call</button>
    <audio controls></audio>

Yup! I didn't miss anything. That's all we need: a button to start the call to another peer and an HTML5 audio tag to output the audio from the other end.

Now let's proceed with the JavaScript. First, declare a method that will get the query parameters by name:

function getParameterByName(name){
    name = name.replace(/[\[]/, "\\[").replace(/[\]]/, "\\]");
    var regex = new RegExp("[\\?&]" + name + "=([^&#]*)"),
        results = regex.exec(location.search);
    return results === null ? "" : decodeURIComponent(results[1].replace(/\+/g, " "));
}

The way this app works is by using from and to as query parameters, where from is the id that you want to give to the peer who's currently using the device and to is the id of the peer on the other side. So we use the method above to easily get those values. To emphasize further, here's how the URL that we will use to access the app would look on our side (john):

http://mysite.com/call-app.html?from=john&to=jane

And on the other side (jane), it would look like this:

http://mysite.com/call-app.html?from=jane&to=john

We’ve basically just interchanged the two peers so we know exactly where the request is coming from and where its going to.

Next we declare the method that will ask the user for permission for the page to use the microphone. This method takes 2 parameters, the successCallback and the errorCallback. The successCallback is called when the page has been granted permission to use the microphone, and the errorCallback is called when the user declined.

function getAudio(successCallback, errorCallback){
    navigator.getUserMedia({
        audio: true,
        video: false
    }, successCallback, errorCallback);
}

Next, declare the method that will be called when a call is received from a peer. This method has the call object as its parameter. We use this call object to initiate an answer to the call. But first we need to ask the user for permission to use the microphone by calling the getAudio method. Once we get the permission, we can then answer the call by calling the answer method on the call object. This method takes the MediaStream as its argument. If we didn't get permission to use the microphone, we just log that an error occurred and then output the actual error. Finally, we listen for the stream event on the call and call the onReceiveStream method when the event happens. This stream event can be triggered in 2 ways: first, when a peer initiates a call to another peer, and second, when the other peer actually answers the call.

function onReceiveCall(call){

    console.log('peer is calling...');
    console.log(call);

    getAudio(
        function(MediaStream){
            call.answer(MediaStream);
            console.log('answering call started...');
        },
        function(err){
            console.log('an error occured while getting the audio');
            console.log(err);
        }
    );

    call.on('stream', onReceiveStream);
}

Next is the onReceiveStream method. This method is called when a media stream is received from the other peer. This is where we convert the media stream to a URL which we use as the source for the audio tag. The stream is basically an object which contains the current audio data, and we convert it to a URL by using the window.URL.createObjectURL method. Once all the metadata is loaded, we play the audio.

function onReceiveStream(stream){
    var audio = document.querySelector('audio');
    audio.src = window.URL.createObjectURL(stream);
    audio.onloadedmetadata = function(e){
        console.log('now playing the audio');
        audio.play();
    }
}

Now that we're done with all the method declarations, it's time to actually call them. First we need to know where the request is coming from and who it will be sent to.

var from = getParameterByName('from');
var to = getParameterByName('to');

Next we declare a new peer. This takes the id of the peer as its first argument, and the second argument is an object containing the PeerJS key. If you do not have a key yet, you can register for the PeerJS Cloud service. It's free for up to 50 concurrent connections. After that, we also need to set the ICE server config. This ensures that we can get the peers to connect to each other without having to worry about external IPs assigned by routers, firewalls, proxies and other kinds of network security which can get in the way. You need to have at least one STUN server and one TURN server configuration added. You can get a list of freely available STUN and TURN servers here.

var peer = new Peer(
    from,
    {
        key: 'Your PeerJS API Key',
        config: {'iceServers': [
            { url: 'stun:stun1.l.google.com:19302' },
            { url: 'turn:numb.viagenie.ca', credential: 'muazkh', username: 'webrtc@live.com' }
        ]}
    }
);

If you want to use your own server and get around the 50 concurrent connections limit of the PeerServer cloud, you can install PeerJS Server on your existing Express app in Node:

npm install peer --save

And then use it like so:

var express = require('express');
var express_peer_server = require('peer').ExpressPeerServer;
var peer_options = {
    debug: true
};

var app = express();

var server = app.listen(3000);

app.use('/peerjs', express_peer_server(server, peer_options));

And from the client side you can now use your shiny new PeerJS server:

var peer = new Peer(from, {
        host: 'your-peerjs-server.com', port: 3000, path: '/peerjs',
        config: {'iceServers': [
            { url: 'stun:stun1.l.google.com:19302' },
            { url: 'turn:numb.viagenie.ca', credential: 'muazkh', username: 'webrtc@live.com' }
        ]}
    }
);

Next is some optional code. We only use it to determine if the peer was actually created. Here we simply listen for the open event on the peer object, and once it happens, we just output the peer id.

peer.on('open', function(id){
    console.log('My peer ID is: ' + id);
});

Next we listen for the call event. This is triggered when a peer tries to make a call to the current user.

peer.on('call', onReceiveCall);

Finally, here’s the code we use when we initiate the call ourselves:

$('#start-call').click(function(){

    console.log('starting call...');

    getAudio(
        function(MediaStream){

            console.log('now calling ' + to);
            var call = peer.call(to, MediaStream);
            call.on('stream', onReceiveStream);
        },
        function(err){
            console.log('an error occured while getting the audio');
            console.log(err);
        }
    );

});

What this does is listen for the click event on the start-call button. It then calls the getAudio method to ask the user for permission to use the microphone. If the user allows it, the call is made to the peer using the call method. This takes the id of the peer on the other side and the MediaStream. Next, we just listen for the stream event and then call the onReceiveStream method when it happens. Note that this stream would be the audio stream from the peer on the other side and not the audio stream of the current user; otherwise we would also hear our own voice. The same is true for the stream that we're getting in the onReceiveCall method.
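
One thing the code above doesn't handle is hanging up. PeerJS connections can be closed, so a rough sketch might look like this, assuming you add an #end-call button to the markup and keep a reference to the current call:

var current_call = null;

//inside the click handler above, keep a reference to the call:
//current_call = peer.call(to, MediaStream);

$('#end-call').click(function(){
    if(current_call){
        current_call.close(); //closes the media connection on both ends
        current_call = null;
    }
});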

Conclusion

That's it! In this tutorial we've learned how to implement audio calls with WebRTC and PeerJS. Be sure to check out the PeerJS documentation if you want to learn more.


Getting Started With CouchDB in Node.js


In this tutorial I'm going to walk you through how to get started with CouchDB in Node.js. But first, here's some background on what CouchDB is. CouchDB is a NoSQL database from the Apache Foundation. Just like many other NoSQL databases out there, it uses JSON to store data, and it deals with separate documents instead of tables and fields.

Installing CouchDB

You can install CouchDB by executing the following command:

sudo apt-get install couchdb

Once that's done, you can test if it's successfully installed by accessing http://localhost:5984/ from your browser. You're good to go if it returns a response similar to the following:

{"couchdb":"Welcome","uuid":"0eb12dd741b22a919c8701dd6dc14087","version":"1.5.0","vendor":{"version":"14.04","name":"Ubuntu"}}

Futon

If you’re from the RDBMS land. You might be familiar with Phpmyadmin. In CouchDB Futon is the equivalent of Phpmyadmin. It allows you to manage your CouchDB databases with ease. In case you’re wondering what Futon means, its basically a Japanese word. Futon is a traditional japanese bedding.

Ok enough with the trivia. You can access Futon by going to http://localhost:5984/_utils/. It should show you something similar to the following:

[Screenshot: the Futon index page]

The first thing you need to do is configure Futon so that it has an admin user, because by default every user who has access to it has admin privileges. It can only be accessed from the local computer, so this isn't really a security issue, unless a hacker gets access to the server. You can set up an admin by going into the configuration page. Just click 'Configuration' under the tools menu to get there. Next, click on the 'setup admin' link found at the bottom right corner. This should open up a modal that asks you to enter the username and password that you can use for logging in as admin.

Just enter your desired username and password and then click 'create' to create the admin. You can now login as an admin by clicking on the 'login' link. Once you have set up your first admin user, non-admin users will only have read privileges.

With Futon you can create a new database, add documents, update documents, delete documents and delete a database. Using Futon is pretty straightforward so I’m just going to leave it to you to explore it.

Creating a Database

You can create a new database via Futon. From the Futon index page, click on the ‘create database’ link to create a new database. This will create a new database where you can add new documents.

Adding New Documents

You can add new documents by making a curl request to port 5984 of your localhost. Here’s an example:

curl -X POST http://127.0.0.1:5984/test_db/ -d '{"name": "Ash Ketchum", "age": 10, "type": "trainer"}' -H "Content-Type: application/json"

Here’s a breakdown of the options we have passed to curl:

  • -X POST http://127.0.0.1:5984/test_db/ – the -X option is used to specify the type of request and the host. In this case the host is the URL at which CouchDB is running, followed by the database name, and the type of request is POST.
  • -d '{"name": "Ash Ketchum", "age": 10, "type": "trainer"}' – -d is used for specifying the data that you want to submit. In this case we're using a JSON string to represent the data. Note that there are no fields required by CouchDB, but it's helpful to specify a type field so that we can easily query documents later on based on their type.
  • -H "Content-Type: application/json" – -H is used for specifying the header type.

Executing the command above will return something similar to the following:

{
    "ok":true,
    "id":"cc6b37f1e6b2215f2a5ccac38c000a43",
    "rev":"1-61280846062dcdb986c5a6c4aa9aaf03"
}

This is the status of the request (ok), the id assigned to the document (id), and the revision number (rev).

Retrieving Documents

You can retrieve all documents from a specific database by using a GET request:

curl -X GET http://127.0.0.1:5984/test_db/_all_docs 

This returns the following:

{
    "total_rows":1,
    "offset":0,
    "rows":[
        {
            "id":"cc6b37f1e6b2215f2a5ccac38c000a43",
            "key":"cc6b37f1e6b2215f2a5ccac38c000a43",
            "value":{
                "rev":"1-61280846062dcdb986c5a6c4aa9aaf03"
            }
        }
    ]
}

Note that this only returns the id, key and value of each document and not the actual contents. If you also need to return the contents, just add include_docs as a query parameter and set its value to true:

curl -X GET http://127.0.0.1:5984/test_db/_all_docs?include_docs=true

If you want to retrieve a specific document, use the document id:

curl -X GET http://127.0.0.1:5984/test_db/cc6b37f1e6b2215f2a5ccac38c000a43

If you want to retrieve a specific revision, you can supply rev as a query parameter and then use the revision id as the value.

curl -X GET http://127.0.0.1:5984/test_db/cc6b37f1e6b2215f2a5ccac38c000a43?rev=1-61280846062dcdb986c5a6c4aa9aaf03

Updating Documents

You can update documents by using the document id and the revision id. All you have to do is make a PUT request to the database that you want to update, with the document id added as a path, and then supply the updated data along with the revision that you want to update:

curl -X PUT http://127.0.0.1:5984/test_db/cc6b37f1e6b2215f2a5ccac38c000a43 -d '{"_rev": "1-61280846062dcdb986c5a6c4aa9aaf03", "name": "Ash Ketchum", "age": 12, "type": "trainer"}' -H "Content-Type: application/json"

It should return something similar to the following if the update was successful:

{
    "ok":true,
    "id":"cc6b37f1e6b2215f2a5ccac38c000a43",
    "rev":"2-0023f19d7d3097468a8eeec014018840"
}

Revisions are an important feature of CouchDB. It's like built-in version control for each document. You can always go back to a previous version of a specific document as long as you haven't deleted it.

Deleting Documents

You can delete a document by using the same path as when updating or retrieving documents. The only difference is that you need to use a DELETE request and supply the revision id as a query parameter:

curl -X DELETE http://127.0.0.1:5984/test_db/cc6b37f1e6b2215f2a5ccac38c000a43?rev=2-0023f19d7d3097468a8eeec014018840

This deletes the second revision of the document. If you check the document in Futon, you will no longer see it there. But you will still be able to get a specific revision which hasn't been deleted if you supply the previous revision id in the request for getting a specific document:

curl -X GET http://127.0.0.1:5984/test_db/cc6b37f1e6b2215f2a5ccac38c000a43?rev=1-61280846062dcdb986c5a6c4aa9aaf03

Backup and Restore

Unlike phpMyAdmin, Futon doesn't come with backup and restore capabilities. Good thing we have this awesome guy who created a backup and restore utility for CouchDB. Just download the couchdb-backup.sh file from the GitHub repo and place it somewhere on your computer.

To backup a specific database, just use the bash command and supply the filename of the backup utility. You supply the -b option if you want to backup and -r if you want to restore. -H is the host; if you don't supply the port, it uses 5984 by default. -d is the name of the database. -f is the filename of the backup file that will be created. -u is the admin username that you use for logging in to Futon. And -p is the password:

bash couchdb-backup.sh -b -H 127.0.0.1 -d test_db -f test_db.json -u your_username -p your_password

To restore the backup, just supply the -r option instead of -b:

bash couchdb-backup.sh -r -H 127.0.0.1 -d test_db -f test_db.json -u your_username -p your_password

Views

Views are used to query the database for specific data. If you're coming from the RDBMS land, you usually select specific data using the SELECT command, then use WHERE to get what you want, and once you're done, you call it a day. With CouchDB it's different, because it doesn't come with functions that allow you to select specific data easily. In CouchDB we need to use views. A view is basically just a JavaScript function that emits the documents that you need.

Before we move on with working with views, you can add the following document to your CouchDB database if you want to follow along:

{"new_edits":false,"docs":[
{"_id":"cc6b37f1e6b2215f2a5ccac38c000e58","_rev":"1-cbc1dd4e0dd53b3f9770bb8edc30ae33","name":"pikachu","type":"electric","trainer":"ash","gender":"m"},
{"_id":"cc6b37f1e6b2215f2a5ccac38c001e2c","_rev":"2-fbe6131ea1248b83301900a5954dec6d","name":"squirtle","type":"water","trainer":"ash","gender":"m"},
{"_id":"cc6b37f1e6b2215f2a5ccac38c0020d9","_rev":"1-8f98424393470486d60cf5fff00f33d3","name":"starmie","type":"water","trainer":"misty","gender":"f"},
{"_id":"cc6b37f1e6b2215f2a5ccac38c00215e","_rev":"1-aac04234d60216760bd9e3f89fa602e9","name":"geodude","type":"rock","trainer":"brock","gender":"m"},
{"_id":"cc6b37f1e6b2215f2a5ccac38c0030b4","_rev":"1-280586eb35fc3bde31f88ec9913f3dcb","name":"onix","type":"rock","trainer":"brock","gender":"m"}
]}

What you see above is a backup file which you can restore by using the backup and restore utility which I introduced earlier.

Creating a View

You can create a view by selecting your database from Futon. From there, look for the view dropdown box and then select ‘temporary view…’. This allows you to test and create a view. Enter the following in the view code box:

function(doc) {
   emit(doc.type, null);
}

Click on 'run' to run it. This will list all of the documents in the database using the type field as the key. We have set the value to null because we don't need it. The value can be set to doc, in which case the value that's returned will be the actual contents of the document. You can do that, but it's not really good practice since it consumes a lot of memory. Once you see some output, you can now click on 'save as' and then supply the name of the design document and the view name. You can give those any names you want, but it's good practice to give the design document a name which represents the type of document; in this case it's 'pokemon'. And the view name would be the key that you use. Some folks usually prefix it with by_. I also prefer that, so I'll name the view 'by_type'. Click on 'save' once you're done giving the names.

Here’s how you can use the view:

curl "http://127.0.0.1:5984/test_db/_design/pokemon/_view/by_type?key=%22water%22"

Breaking it down, the first part of the URL is the host where CouchDB is running:

http://127.0.0.1:5984

Next is the database:

test_db

And then you specify the name of the design document by supplying _design followed by the name of the design document:

_design/pokemon

Next you also need to specify the view:

_view/by_type

And then lastly, your query:

key=%22water%22

Note that you need to supply a URL-encoded query. %22 represents double quotes, so we're wrapping the actual query with %22 instead of double quotes. Executing it would return the following. It's basically the same as what you've seen in Futon, but this time it's filtered according to the value you supplied as the key:

{
    "total_rows":5,
    "offset":3,
    "rows":[
        {
            "id":"cc6b37f1e6b2215f2a5ccac38c001e2c",
            "key":"water",
            "value":null
        },
        {
            "id":"cc6b37f1e6b2215f2a5ccac38c0020d9",
            "key":"water",
            "value":null
        }
    ]
}

So the idea of views is that you have to emit the value for the field that you want to perform your query on. In this case we have emitted the type field.
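
For example, if you wanted to query by trainer instead, you would emit that field as the key; here's what that hypothetical by_trainer view would look like:

function(doc) {
   emit(doc.trainer, null);
}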

Working with Node.js

You can work with CouchDB using the Nano package. You can install it in your project by executing the following command:

npm install nano --save

To use nano, create a new JavaScript file and name it app.js. Then you can connect to CouchDB by adding the following code:

var nano = require('nano')('http://localhost:5984');

If you already have a specific database to work with, you can connect to it by using the db.use method and then supply the name of the database as the argument:

var test_db = nano.db.use('test_db');

Creating New Documents

You can create new documents by using the insert method:

var data = { 
    name: 'pikachu', 
    skills: ['thunder bolt', 'iron tail', 'quick attack', 'mega punch'], 
    type: 'electric' 
};

test_db.insert(data, 'unique_id', function(err, body){
  if(!err){
    //awesome
  }
});

The insert method takes the data that you want to save as its first argument, the id as its second argument, and the third is the function that will be called once it gets a response. Note that the id is optional, so you can choose whether to supply a value or not. If you don't supply a value, CouchDB will automatically generate a unique id for you.

Retrieving Documents

Views are still utilized when retrieving specific documents from CouchDB in Nano. The view method is used for specifying which view you want to use. This method takes the name of the design document as its first argument, the name of the view as its second, and the query parameters that you want to pass in as the third argument. The fourth argument is the function that you want to execute once a response has been received:

var type = 'water';
test_db.view('pokemon', 'by_type', {'key': type, 'include_docs': true}, function(err, body){
    if(!err){
        var rows = body.rows; //the rows returned
    }
});

Updating Documents

Nano doesn’t come with an update method by default, so we need to define a custom method that does it for us. Declare the following near the top of your app.js file, right after your database connection code. It fetches the current revision of the document and copies its _rev into the new document before inserting, since overwriting an existing document in CouchDB requires the current _rev:

test_db.update = function(obj, key, callback){
    var db = this;
    db.get(key, function(error, existing){
        if(!error) obj._rev = existing._rev;
        db.insert(obj, key, callback);
    });
}

You can then use the update method in your code:

test_db.update(doc, doc_id, function(err, res){
    if(!err){
        //document has been updated
    }
});

Note that you need the id of the document when performing an update. That’s why you first need to create a view that emits a unique field as the key and the document id as the value. In this case the unique field is the name; each Pokemon has a unique name, so this works:

function(doc) {
   emit(doc.name, doc._id);
}

Give this view a design document name of ‘pokemon’ and a view name of ‘by_name’. You can then use it to update a Pokemon by name. All you have to do is call the update method once you have retrieved the id and the current document:

var name = 'pikachu';
test_db.view('pokemon', 'by_name', {'key': name, 'include_docs': true}, function(select_err, select_body){
    if(!select_err){
        var doc_id = select_body.rows[0].id;
        var doc = select_body.rows[0].doc;

        //do your updates here
        doc.age = 99; //you can add new fields or update existing ones

        test_db.update(doc, doc_id, function(err, res){
            if(!err){
                //document has been updated
            }
        });
    }
});

Deleting Documents

If you no longer want a specific document and you need to delete it, you can use the destroy method. This takes the id of the document as its first argument, the revision id of the revision you want to delete as its second, and the function to execute once you get a response as its third:

test_db.destroy(doc_id, revision_id, function(err, body) {
    if(!err){
        //done deleting
    }
});
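If you only have the document’s id, you can fetch the document first to read its current _rev. A quick sketch:

test_db.get(doc_id, function(err, doc){
    if(!err){
        test_db.destroy(doc_id, doc._rev, function(del_err, body){
            if(!del_err){
                //done deleting
            }
        });
    }
});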

Conclusion

That’s it! In this tutorial you’ve learned the basics of using CouchDB through Futon, curl and Node.js. We’ve barely scratched the surface, so do check out the resources below if you want to learn more.

Resources

Getting Started With the Yahoo Finance API

| Comments

The Yahoo Finance API provides a way for developers to get the latest information about the stock market: how different stocks are doing, what the current buying price is for a single stock, how much the current market value differs from yesterday’s, and so on.

The first thing that you need to do is install the Guzzle library for PHP. This allows us to easily make HTTP requests to the server. You can do that by adding the following to your composer.json file:

{
  "require": {
    "guzzlehttp/guzzle": "~5.0"
  }
}

Then execute composer install from your terminal.

Next, create a test.php file and add the following code:

<?php
require 'vendor/autoload.php';
$client = new GuzzleHttp\Client();
?>

This allows us to use Guzzle from our file.

Before we move on, here’s the specific data that you can get from the API:

Pricing

  • a – ask
  • b – bid
  • b2 – ask (realtime)
  • b3 – bid (realtime)
  • p – previous close
  • o – open

Dividends

  • y – dividend yield
  • d – dividend per share
  • r1 – dividend pay date
  • q – ex-dividend date

Date

  • c1 – change
  • c – change & percentage change
  • c6 – change (realtime)
  • k2 – change percent
  • p2 – change in percent
  • d1 – last trade date
  • d2 – trade date
  • t1 – last trade time

Averages

  • c8 – after hours change
  • c3 – commission
  • g – day’s low
  • h – day’s high
  • k1 – last trade (realtime) with time
  • l – last trade (with time)
  • l1 – last trade (price only)
  • t8 – 1 yr target price
  • m5 – change from 200 day moving average
  • m6 – percent change from 200 day moving average
  • m7 – change from 50 day moving average
  • m8 – percent change from 50 day moving average
  • m3 – 50 day moving average
  • m4 – 200 day moving average

Misc

  • w1 – day’s value change
  • w4 – day’s value change (realtime)
  • p1 – price paid
  • m – day’s range
  • m2 – day’s range (realtime)
  • g1 – holding gain percent
  • g3 – annualized gain
  • g4 – holdings gain
  • g5 – holdings gain percent (realtime)
  • g6 – holdings gain (realtime)
  • t7 – ticker trend
  • t6 – trade links
  • i5 – order book (realtime)
  • l2 – high limit
  • l3 – low limit
  • v1 – holdings value
  • v7 – holdings value (realtime)
  • s6 – revenue

52 Week Pricing

  • k – 52 week high
  • j – 52 week low
  • j5 – change from 52 week low
  • k4 – change from 52 week high
  • j6 – percent change from 52 week low
  • k5 – percent change from 52 week high
  • w – 52 week range

Symbol Info

  • v – more info
  • j1 – market capitalization
  • j3 – market cap (realtime)
  • f6 – float shares
  • n – name
  • n4 – notes
  • s – symbol
  • s1 – shares owned
  • x – stock exchange
  • j2 – shares outstanding

Volume

  • v – volume
  • a5 – ask size
  • b6 – bid size
  • k3 – last trade size
  • a2 – average daily volume

Ratios

  • e – earnings per share
  • e7 – eps estimate current year
  • e8 – eps estimate next year
  • e9 – eps estimate next quarter
  • b4 – book value
  • j4 – EBITDA
  • p5 – price / sales
  • p6 – price / book
  • r – P/E ratio
  • r2 – P/E ratio (realtime)
  • r5 – PEG ratio
  • r6 – price / eps estimate current year
  • r7 – price /eps estimate next year
  • s7 – short ratio

Wew! Ok so that’s a lot. I’ll let you catch your breath for a second. Ready?

Ok so now we’re ready to make a request to the API. You can either do that from here:

http://download.finance.yahoo.com/d/quotes.csv?s={SYMBOLS}&f={DATA THAT WE WANT}

Or here:

http://finance.yahoo.com/d/quotes.csv?s={SYMBOLS}&f={DATA THAT WE WANT}

It doesn’t really matter which; both return the same thing. Here’s an example which you can just copy and paste into your browser’s address bar:

http://finance.yahoo.com/d/quotes.csv?s=GOOGL&f=abo

Breaking it down, we make a request to this URL:

http://finance.yahoo.com/d/quotes.csv

And then we pass in some query parameters: s and f. s represents the symbol or symbols that you want to query, and f represents the data that you want; that’s the big list that we just went through earlier. So if you want the API to return the ask, bid and open values, you just need to pass in:

f=abo

In our example, we’re requesting this information for the GOOGL symbol, which is basically Google. When this is requested in the browser, it downloads a quotes.csv file which contains something similar to the following:

580.36,575.90,576.35

It’s a comma-separated list of all the values you requested: 580.36 is the ask price, 575.90 is the bid price, and 576.35 is the open price.
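In PHP, you can break that line apart with str_getcsv. A minimal sketch:

<?php
$line = '580.36,575.90,576.35';
list($ask, $bid, $open) = str_getcsv($line);
echo $ask; //580.36
?>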

If you want to query more than one symbol, just separate each symbol with a comma. For example, to request the stock information for Google, Apple, Microsoft and Facebook:

http://finance.yahoo.com/d/quotes.csv?s=GOOGL,AAPL,MSFT,FB&f=abo

Now let’s proceed with actually making this work in PHP. First we need to create a table that will store the information that we need. In this case, we only need the symbol, ask, bid and open values:

CREATE TABLE symbols (
    id INT(6) UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    symbol VARCHAR(30) NOT NULL,
    ask DOUBLE,
    bid DOUBLE,
    open DOUBLE
)

Next, create an indexer.php file. What this file does is query the Yahoo Finance API and save the results to a CSV file. Note that we can only query up to 200 symbols per request, so we’ll have to work within that limit in our code.

The code below first queries the number of symbols currently in the database, then calculates how many times we need to loop in order to update all of them. We also declare the path of the CSV file in which we’ll save the results from the API, and initialize it by setting its contents to an empty string. Then we declare the format sabo, which means symbol, ask, bid and open. Next we create a for loop that keeps executing until the value of $x reaches the total loop count we got from dividing the total number of symbols by the API limit. Inside the loop we calculate the offset by multiplying the current value of $x by the API limit, then select the symbols we need based on that offset. We loop through the results, pick out each symbol and put them in an array. After the loop, we convert the array into a comma-separated list, which is the format the API expects, and make the request. Once we get the result back, we append it to the CSV file using file_put_contents.

<?php
require 'vendor/autoload.php';
$db = new Mysqli(HOST, USER, PASS, DB); //replace HOST, USER, PASS and DB with your own credentials
$client = new GuzzleHttp\Client();

$symbols_count_result = $db->query("SELECT COUNT(id) FROM symbols");
$symbol_row = $symbols_count_result->fetch_row();
$symbol_count = $symbol_row[0];

$api_limit = 200; //the API only allows up to 200 symbols per request

$loop_times = $symbol_count / $api_limit;
$loop_times = floor($loop_times) + 1;

$file = 'uploads/csv/stocks.csv';
file_put_contents($file, ''); //initialize the csv file

$format = 'sabo'; //symbol, ask, bid, open

for($x = 0; $x < $loop_times; $x++){

    $from = $x * $api_limit;
    $symbols_result = $db->query("SELECT * FROM symbols LIMIT $api_limit OFFSET $from");

    if($symbols_result->num_rows > 0){

        $symbols = array();
        while($row = $symbols_result->fetch_object()){
            $symbols[] = $row->symbol;
        }

        //query the API for this batch and append the result to the csv file
        $symbols_str = implode(',', $symbols);
        $stocks = $client->get("http://download.finance.yahoo.com/d/quotes.csv?s={$symbols_str}&f={$format}");

        file_put_contents($file, $stocks->getBody(), FILE_APPEND);
    }
}
?>
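The indexer stops at writing the CSV. To actually refresh the symbols table you’d read the file back in and update each row. Here’s a hedged sketch of what that could look like, assuming the same sabo column order and the table we created earlier:

<?php
$db = new Mysqli(HOST, USER, PASS, DB); //your own credentials

$handle = fopen('uploads/csv/stocks.csv', 'r');
while(($row = fgetcsv($handle)) !== false){
    //each row follows the sabo format: symbol, ask, bid, open
    if(count($row) === 4){
        list($symbol, $ask, $bid, $open) = $row;

        $stmt = $db->prepare("UPDATE symbols SET ask = ?, bid = ?, `open` = ? WHERE symbol = ?");
        $stmt->bind_param('ddds', $ask, $bid, $open, $symbol);
        $stmt->execute();
    }
}
fclose($handle);
?>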

That’s it! The Yahoo Finance API is a really nice way of getting financial information about specific companies.

Automating Deployment to EC2 Instance With Git

| Comments

In this tutorial I’m going to show you how to automate the deployment of changes to your EC2 instance using Git. Deployment is done by setting up a bare Git repo somewhere in the home directory of the EC2 instance. A post-receive hook is then set up to automatically update the web root based on the changes. The post-receive hook is executed whenever a push is made to the bare Git repo on the EC2 instance.

Setup SSH

The first thing you need to do is set up SSH access on your development machine (your local computer). You can do that by navigating to the SSH directory:

cd ~/.ssh

Then open up the config file:

sudo nano config

Next add the Host (alias of the ec2 instance), Hostname, User and IdentityFile:

Host websitebroker
Hostname ec2-54-191-181-129.us-west-2.compute.amazonaws.com
User ubuntu
IdentityFile ~/.ssh/amazon-aws.pem

Here’s a breakdown:

  • Host – a unique name you want to give to the server. This is used for referring to the server later on.
  • Hostname – the IP address or domain name of the server.
  • User – the user used for logging in to the server. For an EC2 Ubuntu instance, this is usually ubuntu.
  • IdentityFile – the path to the Amazon identity file you downloaded when you created the EC2 instance.

You can test whether the SSH configuration works by logging in to the server using the Host you’ve added:

ssh websitebroker

Executing the command above should log you in to the server if your configuration is correct.

Setup Git

Once that’s done, you can log in to the server using SSH and set up Git like you would usually set it up on your local computer.

First, install Git:

sudo add-apt-repository ppa:pdoes/ppa
sudo apt-get update
sudo apt-get install git-core

Once that’s done, give an identity to the server:

git config --global user.name "websitebroker"
git config --global user.email websitebroker@islickmedia.com

Next, generate an SSH keypair:

ssh-keygen -t rsa -C "websitebroker@islickmedia.com"
ssh-add ~/.ssh/id_rsa

If you’re getting the following error when adding the private keyfile:

Could not open a connection to your authentication agent

You can try starting ssh-agent before executing ssh-add:

eval `ssh-agent -s`

If that doesn’t work, you can try the following solutions.

Once that’s done, you can add the public key to Bitbucket, GitHub or any other Git service you’re currently using. To do that, navigate to your SSH directory and output the contents of the id_rsa.pub file. Copy the output and paste it into the Git service you’re using:

cd ~/.ssh
cat id_rsa.pub

Setup Deployment

Navigate to the home directory of the server:

cd /home/ubuntu

Create and navigate to the directory where we’re going to push our changes later on:

mkdir website.git
cd website.git

Next, setup a bare git repo:

git init --bare

Next create a post-receive file under the hooks directory:

#!/bin/sh
#
# An example hook script for the "post-receive" event.
#
# The "post-receive" script is run after receive-pack has accepted a pack
# and the repository has been updated.  It is passed arguments in through
# stdin in the form
#  <oldrev> <newrev> <refname>
# For example:
#  aa453216d1b3e49e7f6f98441fa56946ddcd6a20 68f7abf4e6f922807889f52bc043ecd31b7$
#
# see contrib/hooks/ for an sample, or uncomment the next line and
# rename the file to "post-receive".

#. /usr/share/doc/git-core/contrib/hooks/post-receive-email
GIT_WORK_TREE=/home/ubuntu/www
export GIT_WORK_TREE
git checkout -f

The only thing you need to change here is GIT_WORK_TREE, which is the path where the changes are checked out whenever someone pushes to the bare Git repo. Since we want changes to take effect on the public-facing website, we set GIT_WORK_TREE to the www directory, which is the directory Apache uses to serve the website.

Next, open up the config file of the bare git repo:

cd /home/ubuntu/website.git
sudo nano config

Make sure it contains something similar to the following:

[core]
        repositoryformatversion = 0
        filemode = true
        bare = false
        worktree = /home/ubuntu/www
[receive]
        denycurrentbranch = ignore

Next, you need to make the post-receive file executable:

chmod +x hooks/post-receive

Now, on your development machine, navigate to your project directory and add a new remote called deploy. The path is the SSH alias you gave to the server earlier (in this case websitebroker), followed by a colon, then the path to the bare Git repo:

git remote add deploy websitebroker:/home/ubuntu/website.git

Next, push the references using git push. You only have to do this the first time:

git push deploy +master:refs/heads/master

Now, every time you push to your Bitbucket or GitHub remote repo, you can also push the changes to the server:

git push deploy master

If you want to do it in one command, you can edit the config file of your project (still on your development machine) and add the URLs in there:

[remote "all"]
        url = https://github.com/YourGitAccount/ProjectDir.git
        url = websitebroker:/home/ubuntu/website.git

Now all you have to do is push using the all alias:

git push all master

Note that this deployment strategy doesn’t update the dependencies, so you still need to log in to the server and update your dependencies manually.
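If you wanted the hook to handle that step too, you could extend it. Here’s a sketch, assuming Composer is installed on the server and your project ships a composer.json:

#!/bin/sh
GIT_WORK_TREE=/home/ubuntu/www
export GIT_WORK_TREE
git checkout -f
# assumed addition: refresh PHP dependencies after every deploy
cd /home/ubuntu/www && composer install --no-dev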

Conclusion

Automating deployment with Git is a nice way of saving the time you’d otherwise spend manually pushing changes to the server over FTP. With the deployment strategy we’ve seen in this tutorial, you can push changes to your server by executing a single command in the terminal. It also gives you the advantage of being able to roll back the changes you’ve made.

Resources

Getting Started With Stripe API

| Comments

In this tutorial I’ll walk you through Stripe’s API. Let’s start by defining what Stripe is. From the Stripe website itself:

Stripe is a developer-friendly way to accept payments online and in mobile apps.
We process billions of dollars a year for thousands of companies of all sizes.

Now we know that Stripe is a payment processor, similar to PayPal.

With Stripe, you can accept payments in three ways:

  • Embedded Form
  • Custom Form
  • Mobile App Integration

In this tutorial I’ll only be walking you through the first two: embedded form and custom form.

Embedded Form

If you do not want to bother with creating your own checkout form, an embedded form is the way to go. An embedded form is basically Stripe’s checkout widget. All you have to do is include their script on your website, specifically inside a form element, and the front-end side is handled for you.

<form action="/checkout.php" method="POST">
  <script
    src="https://checkout.stripe.com/checkout.js" class="stripe-button"
    data-key="pk_test_xxxxxxxxxxxxxxxxxxx"
    data-amount="1000"
    data-name="Web Development"
    data-description="Develop a website"
    data-image="http://mywebsite.com/img/logo.png">
  </script>
</form>

Breaking it down, for the script to work you need to supply a value for the following attributes:

  • src – Stripe’s checkout script. This should be https://checkout.stripe.com/checkout.js
  • data-key – your Stripe publishable key. You can find it by clicking on your username, selecting ‘account settings’, then clicking on the ‘api keys’ tab. From there you can use either your test publishable key or your live publishable key. The test key allows you to supply a fake credit card number and pay for a fake product or service. After a successful payment, you can see your fake client on the customers page of your Stripe dashboard. Don’t forget to switch the dashboard to test mode when testing.
  • data-amount – the amount you want to charge, in cents. Just multiply what you want to charge by 100. So for example, to charge $10 you need to supply 1000.
  • data-name – the name of your product or service.
  • data-description – the description of your product or service.
  • data-image – your logo. This should be an absolute url.

Next we need to install Stripe’s library via Composer. Add the following to your composer.json file:

{
  "require": {
    "stripe/stripe-php": "2.*"
  }
}

Once that’s done, execute composer install from your terminal. This will fetch the library from the repository.

Next create the checkout.php file and add the following code:

<?php
require 'vendor/autoload.php';

\Stripe\Stripe::setApiKey('sk_test_xxxxxxxxxxxxxx');

$token = $_POST['stripeToken'];
$email = $_POST['stripeEmail'];


try {
    $charge = \Stripe\Charge::create(array(
      "amount" => 1000,
      "currency" => "usd",
      "source" => $token,
      "description" => $email)
    );

    print_r($charge);
}catch(\Stripe\Error\Card $e){
    echo $e->getMessage();
}
?>

Breaking it down: first we include the vendor/autoload.php file so that we can use the Stripe library in our script. Next we initialize the library by setting the Stripe secret key. Then we grab the data that Stripe supplied from the front-end. stripeToken is the unique token generated by Stripe; it represents the transaction that the client made on the front-end, in this case paying $10 for our service. stripeEmail is simply the email supplied by the client. We wrap the charge call in a try/catch statement, which allows us to capture any error returned by the API and show it to the client. Calling the Stripe charge method requires four arguments:

  • amount – the amount that you want to charge in cents.
  • currency – the currency code representing the currency that we want to use.
  • source – the token that stripe generated on the front-end.
  • description – text that we want to assign to the charge. This is usually the client’s email, but you can add more details, such as the name of the service, if you’re offering more than one product or service.

If the API call is successful, this method returns a whole bunch of data, such as the amount paid and the description. In most cases you’ll only want the id of the transaction. You can get this by accessing the id property:

$charge_id = $charge->id;

You can then save this in your database as a reference. Of course, you can always see it on your payments page as well.
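For example, here’s a minimal sketch that stores the charge id alongside the client’s email, assuming a mysqli connection and a hypothetical payments table with charge_id and email columns:

<?php
//hypothetical table: payments(charge_id VARCHAR, email VARCHAR)
$db = new Mysqli(HOST, USER, PASS, DB); //your own credentials
$stmt = $db->prepare("INSERT INTO payments (charge_id, email) VALUES (?, ?)");
$stmt->bind_param('ss', $charge_id, $email);
$stmt->execute();
?>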

Custom Forms

If you need to ask for additional information from your clients, or you just want to use your own form, you can use a custom form. This allows you to write your own markup, supply your own fields and style them the way you want with CSS. Here’s an example of how a custom checkout form might look:

<form action="checkout.php" method="POST" id="payment-form">
  <div class="errors"></div>

  <div>
    <label for="email">Email</label>
    <input type="email" id="email" name="email">
  </div>

  <div>
    <label for="name">Name</label>
    <input type="text" id="name" name="name">
  </div>

  <div>
    <label for="card-number">Card Number</label>
    <input type="text" size="20" data-stripe="number" id="card-number" name="card-number">
  </div>

  <div>
    <label for="cvc">Security Code</label>
    <input type="text" size="4" data-stripe="cvc" id="cvc" name="cvc">
  </div>

  <div>
    <label>Expiration (MM/YYYY)</label>
    <input type="text" data-stripe="exp-month" name="exp-month">
    <span> / </span>
    <input type="text" data-stripe="exp-year" name="exp-year"/>
  </div>

  <button type="submit">Pay</button>
</form>

This form works with the Stripe checkout script through the data-stripe attribute on the card fields. Supply the value on the right-hand side as the value of the data-stripe attribute for the corresponding field, matching the form markup above:

  • card number – number
  • security code or cvc – cvc
  • card expiration month – exp-month
  • card expiration year – exp-year

Next we need to include Stripe’s JavaScript library:

<script type="text/javascript" src="https://js.stripe.com/v2/"></script>

And then set the publishable key. This allows Stripe to identify which account the request came from:

<script>
Stripe.setPublishableKey('pk_test_xxxxxxxxxxxxxxxxxxx');
</script>

Next we define the function that will process the response we get from Stripe when the client submits the payment form. It takes two parameters: status and response. status is the status code, and response contains the actual Stripe response, an object with information about the transaction. One of its properties is id, which is the token that we need to pass to the back-end. All we have to do is append it to the payment form so it gets submitted with the rest of the fields. If there is an error with the request, an error property becomes available in the response object. It contains the error message, which we show to the user by putting it in the errors div. After that, we re-enable the submit button so the client can fix the errors and submit the form again:

function processStripeResponse(status, response){
  var form = $('#payment-form');

  if(response.error){
    form.find('.errors').text(response.error.message);
    form.find('button').prop('disabled', false);
  }else{
    var token = response.id;
    form.append($('<input type="hidden" name="stripeToken" />').val(token));
    form.get(0).submit();
  }
};

Next we define the event handler for when the payment form is submitted. This calls the createToken method, which takes the payment form and the response handler as its arguments. Don’t forget to return false so the form doesn’t get submitted right away; the response handler will trigger the submit if the response doesn’t contain any errors:

$(function(){
  $('#payment-form').submit(function(event) {
    var form = $(this);
    form.find('button').prop('disabled', true);

    Stripe.card.createToken(form, processStripeResponse);

    return false;
  });
});

On the back-end we can just use the previous code, plus the custom fields that we added. Note that the stripeToken field stays the same, and we don’t need to pass the card number, security code or expiration date. The extra name field is available in $_POST if you want to include it in the charge description:

<?php
require 'vendor/autoload.php';

\Stripe\Stripe::setApiKey('sk_test_xxxxxxxxxxxxxx');

$token = $_POST['stripeToken'];

$email = $_POST['email'];
$name = $_POST['name'];

try {
    $charge = \Stripe\Charge::create(array(
      "amount" => 1000,
      "currency" => "usd",
      "source" => $token,
      "description" => $email)
    );

    print_r($charge);
}catch(\Stripe\Error\Card $e){
    echo $e->getMessage();
}
?>

Conclusion

That’s it! You’ve learned how to interact with the Stripe API in order to easily process payments.

Resources

How to Assign a Namecheap Domain Name on a DigitalOcean Droplet

| Comments

In this quick tip I’ll be showing you how to assign a domain name bought from Namecheap to your DigitalOcean droplet.

First, log in to your Namecheap account. Click on the manage domains menu that can be found under your username, then click on the domain name that you want to assign. In the menu on the left side, under the general section, click on the Domain Name Server Setup link. Once on that page, select the ‘specify custom DNS servers’ option and enter the following:

  • ns1.digitalocean.com
  • ns2.digitalocean.com
  • ns3.digitalocean.com

Next, log in to your DigitalOcean account and navigate to the droplet that you want to use. From your droplet’s main page, click on the DNS link, then click on the Add Domain button. This shows the form for adding a new domain. Set the name to the domain that you’re trying to assign, and the IP address to the IP address of your droplet, or just select your droplet from the dropdown.

dns settings

Once you’ve filled all that out, click on the create domain button.

That’s it! Just wait for about 30 minutes to an hour for the settings to propagate. Once that’s done, you can access your droplet using the domain name that you assigned.
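If you want to verify that the name servers have switched over, you can query them with dig from your terminal (yourdomain.com below is a placeholder):

dig +short NS yourdomain.com

Once it returns the three DigitalOcean name servers listed above, the change has propagated.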

Introduction to Antares Web

| Comments

Welcome to yet another promotional post on another side project of mine. This time it’s the web version of Antares. If you don’t know what Antares is, it’s basically a news app for Android; a news app targeted at developers, to be exact. You can read all about it here: Introduction to Antares.

So yeah, Antares Web is just a website where you can read all the news from popular sources such as Hacker News, Product Hunt, Medium, Designer News, Slashdot and many others. There’s also news coming from popular curators such as echojs and from developer newsletters. The news items are ordered from newest to oldest, so you’re assured that the ones on top are the latest. Antares uses infinite scrolling, so if you missed yesterday’s news, you can always scroll until you find something you’re interested in reading.

Future Plans

  • More news sources.
  • Viewing of news on a specific date.
  • Top news. Something simple like logging the view count on each link based on the number of clicks it gets, then ordering the results from the most views to the least.
  • Mobile version. Currently it doesn’t look that good on mobile, especially on devices below 400px width.
  • Social sharing. Facebook, Twitter and LinkedIn sharing, and possibly Google Plus. I’ll just add it as a button below each news link so that users can easily share. Integration with my Ahead project seems like a good idea as well, so users can easily schedule posts to their social accounts for later publishing.
  • Bookmarking. I’m looking at Pocket integration, so each news link will have its own button for saving to Pocket.

That’s all for now. If you want to know more about this project, you can always visit its project page. If you’re a developer, you can also check it out on GitHub. Feel free to contribute.