Wern Ancheta

Adventures in Web Development.

Best Anime of All Time


I decided to give my blog a 3-week break so I could make time for the 200 other things that I want to do. But then I said “fuck it”. It’s not just programming stuff that I can publish here on this blog. It’s my personal blog after all. I can always publish other stuff that won’t take much of my time to write. So this time I decided to disguise a list of the best anime of all time as an actual blog post. Of course, this is all just my opinion. We all have different tastes, so don’t take my word for it. Try watching 2 or 3 episodes and see for yourself. Ok, here goes:

  • Psycho Pass
  • Code Geass
  • Samurai Champloo
  • Anohana
  • Guilty Crown
  • Xam’d: Lost Memories
  • Parasyte the Maxim
  • Durarara!!
  • Eden of the East
  • Darker than Black
  • Fullmetal Alchemist: Brotherhood
  • Steins;Gate
  • D.Gray-man
  • Hunter X Hunter
  • The Melancholy of Haruhi Suzumiya
  • K-on
  • Hajime no Ippo
  • Katanagatari
  • Tengen Toppa Gurren Lagann
  • Kill la Kill
  • Haikyuu!!
  • Kuroko no Basket
  • Gatchaman Crowds
  • Tsuritama
  • Death Note
  • Yu Yu Hakusho
  • Attack on Titan
  • Avatar: The Last Airbender
  • Avatar: The Legend of Korra
  • Mirai Nikki
  • Toradora!
  • Kaichou wa Maid-sama!
  • Medaka Box
  • Accel World
  • Deadman Wonderland
  • Magi
  • Shaman King
  • Baccano!
  • Sket Dance
  • Akame Ga Kill
  • Nanatsu no Taizai
  • Slam Dunk
  • Assassination Classroom
  • Oregairu
  • Shokugeki no Soma
  • Hitsugi no Chaika
  • One Week Friends
  • Kakumeiki Valvrave
  • Yowamushi Pedal
  • Hamatora
  • Zankyou no Terror
  • Bakuman
  • Usagi Drop
  • Hanasaku Iroha
  • Tiger & Bunny
  • A-Channel

That’s all I can think of for now. I really have a bad memory, so even if I’ve watched a really, really good anime, it might not have made it onto this list.

Quick Tip: How to Add Custom Pages in WordPress


In this quick tip I’ll be showing you the easiest and quickest way to create custom pages under a specific theme in WordPress. When I say custom, I mean a page where you can put anything you want using HTML, CSS, JavaScript and PHP code. The page also has access to the various APIs that WordPress provides.

To start, create a new file under your theme folder. In this case I’ll be creating a custom-page.php file under the wp-content/themes/twentyfifteen directory of my WordPress installation. Then add the following code to the file:

<?php
/*
Template Name: My Awesome Custom Page
*/
?>
<h1>This is my awesome custom page</h1>

Yes, that’s all there is to it. Note that the Template Name: part is very important. You can assign any value you want as long as it’s descriptive. This specific comment is what WordPress uses to recognize your file as a page template.

To assign this template to a specific WordPress page, add a new page from the WordPress admin and select the template we just created under the Template drop-down:

custom wordpress page

Now when you access the page from your browser, you will get that awesome heading. From your custom page you can also use the methods available in the various WordPress APIs, as well as built-in theme functions such as get_header and get_footer.
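As an illustration, here’s what custom-page.php might look like once it pulls in the theme’s header and footer. This is just a sketch: get_header and get_footer are real WordPress theme functions, but they only exist when the file runs inside WordPress, so the snippet won’t run standalone.

```php
<?php
/*
Template Name: My Awesome Custom Page
*/
get_header(); // render the active theme's header.php
?>
<h1>This is my awesome custom page</h1>
<p>Any HTML, CSS, JavaScript or PHP you like can go here.</p>
<?php
get_footer(); // render the active theme's footer.php
```

With this in place, the page keeps the same look and feel as the rest of the theme while still letting you put arbitrary content in between.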

Getting Started With Amazon S3


Amazon S3 is Amazon’s file storage service. It allows users to upload files to Amazon’s servers for later access or for sharing with other people. In this tutorial I’m going to walk you through how to use Amazon S3 within your PHP applications.

First thing that you need to do is create a composer.json file and add the following:

{
    "require": {
        "aws/aws-sdk-php": "2.7.*@dev"
    }
}

Next, execute composer install from your terminal to install the AWS SDK for PHP.

Once the installation is done, create a tester.php file which we will use for interacting with the AWS API. Add the following code to the file:

<?php
require 'vendor/autoload.php';

use Aws\S3\Exception\S3Exception;
use Aws\Common\Aws;
?>

The code above includes the autoload file so that we can use the AWS SDK from our file. Next, we import the Aws\S3\Exception\S3Exception and Aws\Common\Aws namespaces so we can access the different classes available in them. One of those classes is the Aws class, which we can use to set the configuration options for the bucket we are connecting to. All we have to do is call the factory method and pass in the path to the configuration file:

<?php
$aws = Aws::factory('config.php');
?>

The configuration file contains the following code:

<?php
return array(
    'includes' => array('_aws'),
    'services' => array(
        'default_settings' => array(
            'params' => array(
                'credentials' => array(
                    'key'    => 'YOUR_AWS_API_KEY',
                    'secret' => 'YOUR_AWS_API_SECRET',
                ),
                'region' => 'YOUR_BUCKET_REGION'
            )
        )
    )
);
?>

The configuration file basically just returns an array containing the options that we need. The first of those is includes, which bootstraps the configuration with AWS-specific features. Next is services, where we specify the API credentials and region.

Uploading Files

Once that’s done, we can upload files to the S3 bucket of our choice by using the $aws object and calling the get method. This method takes the name of the AWS service you want to use; in this case we’re using S3, so we pass in s3. Next we call the putObject method on the $s3 object and pass in the required parameters as an array. The required keys are Bucket, Key, Body and ACL. Bucket is the name of the bucket where you want to upload the file. Key is the path to the file within the bucket. With S3 you don’t have to worry about whether the directory you’re uploading to already exists; no matter how deep it is, S3 automatically creates the intermediate paths for you. Next is Body, which takes the result of an fopen call. fopen takes the path to the file on your local computer and the operation you want to perform; since we just want to read the file contents, we specify r. Last is the ACL, or Access Control List, of the object. It’s basically like a file permission. Here we specified public-read, which means the file can be read publicly. For more information about ACLs, you can check out this page. We wrap all of this code in a try/catch so we can handle errors gracefully.

<?php
$s3 = $aws->get('s3');

try{
    $s3->putObject(array(
        'Bucket' => 'NAME_OF_BUCKET',
        'Key' => '/path/to/file/filename',
        'Body' => fopen('/path/to/file_to_uploads', 'r'),
        'ACL' => 'public-read',
    ));
}catch (S3Exception $e){
    echo "There was an error uploading the file.<br>";
    echo $e->getMessage();
}
?>

Deleting Files

Next, here’s how to delete existing files from your S3 bucket. This uses the deleteObject method, which takes the name of the bucket and the path to the file as its arguments.

<?php
try{

    $s3->deleteObject(array(
        'Bucket' => 'NAME_OF_BUCKET',
        'Key' => '/path/to/file/filename'
    ));

}catch(S3Exception $e){
    echo "There was an error deleting the file.<br>";
    echo $e->getMessage();
}
?>

Listing Buckets

Lastly, here’s how to get a list of the buckets currently in your Amazon account:

<?php
$result = $s3->listBuckets();

foreach ($result['Buckets'] as $bucket) {
    echo "{$bucket['Name']} - {$bucket['CreationDate']}<br>";
}
?>

Conclusion

That’s it! In this tutorial you’ve learned how to work with Amazon S3 from within your PHP applications. Specifically, we’ve taken a look at how to upload files, delete files and list buckets.

Resources

Building a Nearby Places Search App With Google Places API


In this tutorial we’re going to build an app that allows users to search for a specific place and then find nearby places in a specific category, such as restaurants, churches, and schools. We will implement the app with Google Maps, Google Places and PHP.

Getting API Credentials

First you need to get API credentials from your Google Console and then enable the Google Maps and Google Places APIs. If you don’t know how to do that, feel free to ask Google; I believe this topic has already been written about before. Here are the APIs that you need to enable:

  • Google Maps JavaScript API
  • Google Places API Web Service

Building the App

Now we’re ready to build the app. First, let’s work on the back-end side of things.

Getting Results from the Places API

To make our lives easier, we’re going to use a library for making requests to the Google Places API. Add the following to your composer.json file:

{
    "require": {
        "joshtronic/php-googleplaces": "dev-master"
    }
}

Once you’re done, execute composer install in your terminal to install the library. Now we can use the library like so:

<?php

require 'vendor/autoload.php';

$google_places = new joshtronic\GooglePlaces('YOUR_GOOGLE_API_KEY');

$lat = $_POST['lat'];
$lng = $_POST['lng'];
$place_types = $_POST['place_types'];

$google_places->location = array($lat, $lng);
$google_places->radius = 8046; //hard-coded radius
$google_places->types = $place_types;
$nearby_places = $google_places->nearbySearch();

?>

Breaking it down. First we include the autoload file so we can access the library from our file:

<?php
require 'vendor/autoload.php';
?>

Next, we created a new instance of the GooglePlaces class. You need to supply the API key that you got earlier from your Google Console:

<?php
$google_places = new joshtronic\GooglePlaces('YOUR_GOOGLE_API_KEY');
?>

Next, we get the data that will be supplied later on from the client side and assign each value to its own variable:

<?php
$lat = $_POST['lat'];
$lng = $_POST['lng'];
$place_types = $_POST['place_types'];
?>

Lastly, we make the actual request to the Google Places API. This library works a little differently from the usual, in the sense that we pass the parameters needed by the search method through the object that we got from creating a new instance of the GooglePlaces class. The first thing we need to set is the location, which takes an array containing the coordinates (latitude and longitude) of the place we’re using as a reference point. This is basically where we are, the place we want to find nearby places around. Next you need to supply the radius: how many meters from the reference point the search should be limited to. In this case we supplied a hard-coded value of 8046 meters, which is about 8 kilometers. If you want the user to have more control over this value, you can try adding a slider that changes the radius. The last one is types, an array of the types of places you want to see in the results, for example restaurants (yeah, I’m hungry, so I’ve mentioned those twice now), parks, shopping centers, etc. Once you’ve supplied those, you can call the nearbySearch method. This makes the request to the API and returns the data that we need. We just have to turn it into a JSON string so it can be parsed and read later on the client side.

<?php
$google_places->location = array($lat, $lng);
$google_places->radius = 8046; //hard-coded radius
$google_places->types = $place_types;
$nearby_places = $google_places->nearbySearch();

echo json_encode($nearby_places);
?>

Creating the Map

Next we move on to the client-side. Create a new index.html file and put the following code:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>gmap</title>
  <link rel="stylesheet" href="style.css">
  <script src="http://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js"></script>
  <script type="text/javascript" src="http://maps.googleapis.com/maps/api/js?key=YOUR_GOOGLE_API_KEY&sensor=false&libraries=places"></script>
</head>
<body>
  <div id="map-container">
    <input type="text" id="search">
    <div id="map-canvas"></div>
  </div>

  <div id="place-types">
    <ul>
      <li>
        <input type="checkbox" data-type="bar"> bar
      </li>
      <li>
        <input type="checkbox" data-type="bus_station"> bus station
      </li>
      <li>
        <input type="checkbox" data-type="hospital"> hospital
      </li>
      <li>
        <input type="checkbox" data-type="health"> health
      </li>
      <li>
        <input type="checkbox" data-type="police"> police
      </li>
      <li>
        <input type="checkbox" data-type="post_office"> post office
      </li>
      <li>
        <input type="checkbox" data-type="store"> store
      </li>
      <li>
        <input type="checkbox" data-type="library"> library
      </li>
      <li>
        <input type="checkbox" data-type="fire_station"> fire station
      </li>
      <li>
        <input type="checkbox" data-type="gas_station"> gas station
      </li>
      <li>
        <input type="checkbox" data-type="convenience_store"> convenience store
      </li>
      <li>
        <input type="checkbox" data-type="school"> school
      </li>
    </ul>
    <button id="find-places">Find Places</button>
  </div>

  <script src="map.js"></script>
</body>
</html>

Breaking it down. We include the stylesheet in the page:

<link rel="stylesheet" href="style.css">

Then we include jQuery and the Google Maps JavaScript library. Be sure to update the code so it uses your Google API Key:

<script src="http://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js"></script>
  <script type="text/javascript" src="http://maps.googleapis.com/maps/api/js?key=YOUR_GOOGLE_API_KEY&sensor=false&libraries=places"></script>

Next is the map container, where we have map-canvas, which will serve as the element where the map will be created, plus the search box where the user will search for the place to be used as a reference point:

<div id="map-container">
    <input type="text" id="search">
    <div id="map-canvas"></div>
  </div>

Then come the types of places that we can find. Note that this isn’t everything available in the Google Places API; I just picked some of the place types that I think are essential. For a more complete list you can check this page. Here we added the data-type attribute, which represents the place type. After the list we have the ‘Find Places’ button, which triggers the search:

<div id="place-types">
  <ul>
    <li>
      <input type="checkbox" data-type="bar"> bar
    </li>
    <li>
      <input type="checkbox" data-type="bus_station"> bus station
    </li>
    <li>
      <input type="checkbox" data-type="hospital"> hospital
    </li>
    <li>
      <input type="checkbox" data-type="health"> health
    </li>
    <li>
      <input type="checkbox" data-type="police"> police
    </li>
    <li>
      <input type="checkbox" data-type="post_office"> post office
    </li>
    <li>
      <input type="checkbox" data-type="store"> store
    </li>
    <li>
      <input type="checkbox" data-type="library"> library
    </li>
    <li>
      <input type="checkbox" data-type="fire_station"> fire station
    </li>
    <li>
      <input type="checkbox" data-type="gas_station"> gas station
    </li>
    <li>
      <input type="checkbox" data-type="convenience_store"> convenience store
    </li>
    <li>
      <input type="checkbox" data-type="school"> school
    </li>
  </ul>
  <button id="find-places">Find Places</button>
</div>

Lastly, we include the map.js file, which makes this all work:

<script src="map.js"></script>

Next create the style.css file and put the following code:

#map-container {
  float: left;
}

#map-canvas {
  height: 500px;
  width: 1000px;
}

#place-types {
    float: left;
}

#place-types ul li {
    list-style: none;
}

Finally, we move on to the map.js file. First, declare the default coordinates of the place that the map will display:

var lat = 18.35827827454; //default latitude
var lng = 121.63744354248; //default longitude
var home_coordinates = new google.maps.LatLng(lat, lng); //set default coordinates

Next, define the map options and create the map itself, along with a marker for the reference point. (The map and home_marker variables are used by the code further down, so they need to be created here.)

var map_options = {
  center: new google.maps.LatLng(lat, lng), //set map center
  zoom: 17, //set zoom level to 17
  mapTypeId: google.maps.MapTypeId.ROADMAP //set map type to road map
};

var map = new google.maps.Map(document.getElementById('map-canvas'), map_options); //create the map inside the map-canvas element
var home_marker = new google.maps.Marker({position: home_coordinates}); //marker for the reference point

Next we set the search box up as an auto-complete element. This allows the user to see suggestions of matching locations as they type in the search box. We also need to bind it to the map so the auto-complete bounds are driven by the current viewport of the map.

var input = document.getElementById('search'); //get element to use as input for autocomplete
var autocomplete = new google.maps.places.Autocomplete(input); //set it as the input for autocomplete
autocomplete.bindTo('bounds', map); //bind auto-complete object to the map

Next we listen for the place_changed event triggered by the search box. When this event fires, we get the place information using the getPlace method on the auto-complete object. This allows us to check whether the place being searched for is within the current map viewport. If it is, we just call the fitBounds method on the map object and pass in the geometry.viewport attribute of the place object, which sets the map center to the coordinates of the location. If it’s not within the current viewport, we call the setCenter method on the map object and pass in the geometry.location attribute of the place object. We also call the setZoom method on the map to ensure we keep the same zoom level. Lastly, we set the position of the home_marker to the geometry.location of the place object.

//executed when a place is selected from the search field
google.maps.event.addListener(autocomplete, 'place_changed', function(){

    //get information about the selected place in the autocomplete text field
    var place = autocomplete.getPlace();

    if (place.geometry.viewport){ //for places within the default view port (continents, countries)
      map.fitBounds(place.geometry.viewport); //set map center to the coordinates of the location
    } else { //for places that are not on the default view port (cities, streets)
      map.setCenter(place.geometry.location);  //set map center to the coordinates of the location
      map.setZoom(17); //set a custom zoom level of 17
    }

    home_marker.setMap(map); //set the map to be used by the  marker
    home_marker.setPosition(place.geometry.location); //plot marker into the coordinates of the location

});

Next we declare an array that will store the markers for the places that are searched for. Don’t confuse these with the place used as the reference point; the home_marker is used for that. The places I’m referring to here are the place types such as grocery stores, churches, etc. For convenience I’ll be referring to those markers as place type markers.

var markers_array = [];

Next, create the function that removes the place type markers from the map. We need to call this every time the user clicks the ‘Find Places’ button so that the previous search results are removed from the map.

function removeMarkers(){
  for(i = 0; i < markers_array.length; i++){
    markers_array[i].setMap(null);
  }
}

Finally, we have the handler that listens for the click event on the ‘Find Places’ button. The first thing it does is get the coordinates of the home_marker, which represent the coordinates of the reference point. After that, we declare an empty array where we will store the place types selected by the user. We fill it by looping through all the checked place types and pushing the value of each data-type attribute into the array. Next we call the removeMarkers function to remove the place type markers currently plotted on the map. Then we make a POST request to the server, passing in the coordinates of the reference point and the place types array. Once we get a response, we call JSON.parse so we can extract the results. From there we loop through all the results, get the coordinates of each, and plot a marker on the map. After that we assign an infowindow to each marker so that when it’s clicked, it shows the name of the place.

$('#find-places').click(function(){

  var lat = home_marker.getPosition().lat();
  var lng = home_marker.getPosition().lng();

  var place_types = [];

  //loop through all the place types that has been checked and push it to the place_types array
  $('#place-types input:checked').each(function(){
    var type = $(this).data('type');
    place_types.push(type);
  });

  removeMarkers(); //remove the current place type markers from the map

  //make a request to the server for the matching places
  $.post(
    'places.php',
    {
      'lat': lat,
      'lng': lng,
      'place_types': place_types
    },
    function(response){

      var response_data = JSON.parse(response);

      if(response_data.results){
        var results = response_data.results;
        var result_count = results.length;

        for(var x = 0; x < result_count; x++){

          //get coordinates of the place
          var lat = results[x]['geometry']['location']['lat'];
          var lng = results[x]['geometry']['location']['lng'];

          //create a new infowindow
          var infowindow = new google.maps.InfoWindow();

          //plot the marker into the map
          marker = new google.maps.Marker({
            position: new google.maps.LatLng(lat, lng),
            map: map,
            icon: results[x]['icon']
          });

          markers_array.push(marker);

          //assign an infowindow to the marker so that when its clicked it shows the name of the place
          google.maps.event.addListener(marker, 'click', (function(marker, x){
            return function(){
              infowindow.setContent("<div class='no-scroll'><strong>" + results[x]['name'] + "</strong><br>" + results[x]['vicinity'] + "</div>");
              infowindow.open(map, marker);
            }
          })(marker, x));


        }
      }

    }
  );

});

Here’s a screenshot of the final output:

google places

Conclusion

That’s it! In this tutorial you’ve learned how to work with the Google Places API in PHP. We also created a simple app that allows users to search for specific types of places near the location used as a reference point. If you want to learn more, be sure to check out the resources below.

Resources

Working With Youtube Data API in PHP


Decades ago I got this project where I needed to work with the YouTube API to get the details of videos uploaded by a specific channel, and then create something like a mini-YouTube website out of it. Just kidding about the decades part; it was probably 4-6 months ago. Anyway, it’s only now that I’ve got the time to actually write about it. So here it goes.

Getting API Credentials

First you need to get the API credentials from your Google Console. There’s only a single API credential for all of the APIs that Google offers, so you might already have one. If you do, all you have to do is enable the API in your Google Console page. Currently you will see something like this when you go to APIs & Auth and then click on APIs in your Google Console:

google apis

What we need is the YouTube Data API v3, so click that and enable it. If you do not have an API credential, you can click on ‘Credentials’ under APIs & Auth and then ‘Create new Key’ under the Public API Access section. Choose Server Key as the key type, since we’re working primarily on the server. Don’t take my word for it though: in my experience this sometimes doesn’t work and you actually need to select Browser Key. I just hope Google has fixed this already; server keys are only supposed to be used on the server and browser keys on the client side. Clicking on either browser key or server key will generate an API key for you. This is the key that you will use whenever you need to talk to the YouTube API.

Dependencies

As we are primarily going to be requesting data from another server, we will need curl. If you don’t have it yet, install it on your system. Here’s how you install it on Ubuntu:

sudo apt-get install curl
sudo apt-get update
sudo apt-get install libcurl3 php5-curl

If you’re using another Operating System then feel free to ask Google.

Playing with the API

To make things easier we need a library that will do most of the heavy lifting for us: things like signing the request, constructing it and actually making the request to the server. Because we’re lazy folks, we don’t want to do all that every time we need to talk to an API. Thankfully an awesome guy going by the alias madcoda has already done that work for us. If you already have Composer installed, simply execute the following command inside your project directory:

composer require madcoda/php-youtube-api

This will install the library into your vendor directory, autoload it and add it to your composer.json file.

Once it’s done, you can use the library by including the autoload.php file under the vendor directory and then importing the Madcoda\Youtube namespace.

<?php
require 'vendor/autoload.php';

use Madcoda\Youtube;
?>

Next, create a new instance of the Youtube class and pass in the API key that you acquired earlier as the key item in an array.

<?php
$youtube = new Youtube(array('key' => 'YOUR_API_KEY'));
?>

Searching

With this library you can search for videos, playlists and channels by using the search method. This method takes your query as its argument. For example, say you want to find ‘Awesome’:

<?php
$results = $youtube->search('Awesome');
?>

This will return something similar to the following if you use print_r on the $results:

Array
(
[0] => stdClass Object
    (
        [kind] => youtube#searchResult
        [etag] => "tbWC5XrSXxe1WOAx6MK9z4hHSU8/xBkrpubrM2M6Xi88aNBfaVJV6gE"
        [id] => stdClass Object
            (
                [kind] => youtube#video
                [videoId] => qmTDT92VIRc
            )

        [snippet] => stdClass Object
            (
                [publishedAt] => 2015-01-23T23:03:31.000Z
                [channelId] => UCZpKcVBccIjO9n0RXx3ZGFg
                [title] => PEOPLE ARE AWESOME 2015 (UNBELIEVABLE)
                [description] => People are Awesome 2015 unbelievable talent and natural skills! Subscribe to NcCrullex for more people are awesome videos. Chris Samba Art Channel: ...
                [thumbnails] => stdClass Object
                    (
                        [default] => stdClass Object
                            (
                                [url] => https://i.ytimg.com/vi/qmTDT92VIRc/default.jpg
                            )

                        [medium] => stdClass Object
                            (
                                [url] => https://i.ytimg.com/vi/qmTDT92VIRc/mqdefault.jpg
                            )

                        [high] => stdClass Object
                            (
                                [url] => https://i.ytimg.com/vi/qmTDT92VIRc/hqdefault.jpg
                            )

                    )

                [channelTitle] => NcCrulleX
                [liveBroadcastContent] => none
            )

    )

As you can see, most of the data that you might want is stored in the snippet item: things like the title, description and thumbnail URLs.

You might ask: how would you know if the item is a video, playlist or channel? You may have already noticed from the results above. It’s located under id –> kind. The kind is youtube#video for a video, youtube#channel for a channel, and youtube#playlist for a playlist. Don’t believe me? Try using the API to search for ‘the new boston’ and you’ll see.
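To make that concrete, here’s a small sketch of how you might branch on the kind value. The resultKind helper is hypothetical, and the hand-built $item object just mimics the shape of the print_r output above:

```php
<?php
// Hypothetical helper: classify a decoded search result item by id->kind.
function resultKind($item) {
    $map = array(
        'youtube#video'    => 'video',
        'youtube#playlist' => 'playlist',
        'youtube#channel'  => 'channel',
    );
    $kind = $item->id->kind;
    return isset($map[$kind]) ? $map[$kind] : 'unknown';
}

// Simulated item shaped like the print_r output above.
$item = new stdClass();
$item->id = new stdClass();
$item->id->kind = 'youtube#video';

echo resultKind($item); // video
```

You could use a helper like this to render videos, playlists and channels differently in a mixed result list.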

If you only want to search for videos, you can use the searchVideos method. Just like the search method, this takes your query as its argument:

<?php
$results = $youtube->searchVideos('Ninja');
?>

If you only want to get videos from a specific channel, you can do it in two calls. First, get the channel by using the getChannelByName method and extract the id from the result; then use that id with searchChannelVideos to search for videos in that specific channel:

<?php
$channel = $youtube->getChannelByName('thenewboston');
$results = $youtube->searchChannelVideos('ruby', $channel->id);
?>

The code above would return the first page of results for the ‘ruby’ videos in ‘thenewboston’ channel.

If you only want to return playlists on a specific channel, you can do:

<?php
$channel = $youtube->getChannelByName('thenewboston');
$results = $youtube->getPlaylistsByChannelId($channel->id);
?>

If you want to get the items in a playlist, you can do it in 3 calls:

<?php
$channel = $youtube->getChannelByName('thenewboston');
$playlists = $youtube->getPlaylistsByChannelId($channel->id);
$playlist_items = $youtube->getPlaylistItemsByPlaylistId($playlists[0]->id);
?>

If you want more control over your search, you can use the searchAdvanced method:

<?php
$results = $youtube->searchAdvanced(array(
    'q' => 'fruits',
    'part' => 'snippet',
    'order' => 'rating'
));
?>

Here’s a breakdown of the parameters we’ve just used:

  • q – your query
  • part – the part of the result which you want to get. Earlier, in the sample result, we saw that there are only 2 parts: id and snippet. This parameter lets you specify which one you want. If you only need the video, playlist or channel id, supply id as the part. If you need the full details, use snippet. If you need both, you can use a comma-separated list: id, snippet.
  • order – the basis of the ordering. In the example we used rating, which orders the results from the highest rating to the lowest. I’m not really sure exactly what the rating is, but the first thing that comes to mind is the number of likes on a video. You can also use viewCount, which orders the results from the highest number of views to the lowest.
  • type – the type of item. This can either be video, playlist, or channel.

There are a whole bunch more parameters you can specify. Be sure to check out the search reference.
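For example, restricting a search to playlists ordered by view count just means adding those keys to the parameter array. The sketch below only builds and inspects the array; the actual request (shown commented out) would still need a configured $youtube instance and a valid API key:

```php
<?php
// Build a parameter array that restricts results to playlists,
// ordered from most viewed to least viewed.
$params = array(
    'q'     => 'fruits',
    'part'  => 'id, snippet',
    'order' => 'viewCount',
    'type'  => 'playlist',
);

// With a configured client, the request itself would be:
// $results = $youtube->searchAdvanced($params);
print_r($params);
```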

Pagination

You can also paginate results. First make an initial request so you can get the nextPageToken. Then check if the page token exists; if it does, add a pageToken item to the parameters you supplied earlier and make another request. Since we supplied the nextPageToken, this will navigate to the second page of the same result set. By default the YouTube Data API only returns 10 rows per request, which means the second page shows rows 11 to 20.

<?php
$params = array(
    'q' => 'Ruby',
    'type' => 'video',
    'part' => 'id,snippet',
    'maxResults' => 50
);

$search = $youtube->searchAdvanced($params, true);

//check for a page token
if(isset($search['info']['nextPageToken'])){
    $params['pageToken'] = $search['info']['nextPageToken'];
}

//make another request with the page token added
$search = $youtube->searchAdvanced($params, true);

//do something with the search results here
?>         
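The token flow itself is independent of the PHP wrapper. Here's a rough sketch of the same nextPageToken chaining in JavaScript, against a stubbed search function (fakeSearch and its data are made up for illustration; they're not part of the actual API):

```javascript
// Stub standing in for the search endpoint: returns 2 items per "page"
// plus a nextPageToken until the result set is exhausted.
function fakeSearch(params) {
  var all = ['a', 'b', 'c', 'd', 'e'];
  var start = params.pageToken ? parseInt(params.pageToken, 10) : 0;
  var items = all.slice(start, start + 2);
  var next = start + 2 < all.length ? String(start + 2) : null;
  return { items: items, info: { nextPageToken: next } };
}

var params = { q: 'Ruby' };
var page1 = fakeSearch(params);          // items: ['a', 'b']

// Re-issue the same query with the token added to get the next page.
if (page1.info.nextPageToken) {
  params.pageToken = page1.info.nextPageToken;
}
var page2 = fakeSearch(params);          // items: ['c', 'd']
```

The point is that pagination is stateless on the client: each request is the same query, plus a token the server handed back.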

You can also use the paginateResults method to implement pagination. Just like the method above, we need to make an initial request to get the nextPageToken. We then store it in an array so we can navigate through the results easily. The paginateResults method takes the original search parameters as its first argument and the page token as its second. So all you have to do is supply the nextPageToken that you got from the previous result as the second argument to navigate to the next page. Note that in the example below, the indexes for $page_tokens are just hard-coded; you will have to implement the generation of pagination links yourself and then use their index when navigating through the results. Also note that the results aren't cached, so every time you paginate through the results a new request is made to the YouTube Data API. You will need to implement caching if you don't want to quickly run out of requests.

<?php
//your search parameters
$params = array(
    'q' => 'Python',
    'type' => 'video',
    'part' => 'id,snippet',
    'maxResults' => 50
);

//array for storing page tokens
$page_tokens = array();

//make initial request
$search = $youtube->paginateResults($params, null);

//store page token
$page_tokens[] = $search['info']['nextPageToken'];

//navigate to the next page
$search = $youtube->paginateResults($params, $page_tokens[0]);

//store page token
$page_tokens[] = $search['info']['nextPageToken'];

//navigate to the next page
$search = $youtube->paginateResults($params, $page_tokens[1]);

//store page token
$page_tokens[] = $search['info']['nextPageToken'];

//navigate to the previous page
$search = $youtube->paginateResults($params, $page_tokens[0]);

//do something with the search results here
?>

Conclusion

That's it! In this tutorial you've learned how to work with the YouTube Data API in PHP. You've learned how to get the info of a specific video, get general details about videos in a specific channel, get the videos in a specific playlist, and search for videos, playlists and channels using a query. Don't forget to keep the API request limits in mind, though. The limit information can be found on the YouTube Data API page in your Google Console.

Resources

Creating a Chrome Extension

| Comments

In this tutorial I'll be showing you how to create a very basic Chrome extension: one that allows us to schedule posts with the Ahead project that I created. Here's how it will work:

  1. The user clicks on the extension on a page that they want to share at a future time.
  2. The extension makes a request to the server where Ahead is currently hosted.
  3. The server returns a response, which the extension then displays.

Creating the Extension

Before anything else we need to create the manifest.json file. This is the most important file, since Chrome won't be able to recognize our extension without it.

{
  "manifest_version": 2,
  "name": "Ahead",
  "version": "1.0",
  "description": "Easily schedule posts",

  "browser_action": {
    "default_icon": "icon.png"
  },

  "background": {
    "scripts": ["background.js"]
  },

  "content_scripts": 
    [
        {
            "matches":["<all_urls>"],
            "js":["content.js"],
            "run_at": "document_end"
        }
    ],
  
  "permissions": ["<all_urls>", "storage"],
  "options_page": "options.html"
}

Breaking it down:

  • manifest_version – the version of the manifest file format. The Chrome browser has been around for quite a while now, and so have the extensions written when it first came out; the manifest version lets Chrome tell old and new extensions apart. Currently the latest version we can assign to a manifest file is 2.

  • name – the name you want to give to the extension.

  • version – the version of the extension.
  • description – a descriptive text you want to show your users. This is the text that will show right under the name of the extension when the user accesses the chrome://extensions page.
  • browser_action – used to specify the element which will trigger the extension. In this case we want an icon to be the trigger so we set the default_icon. The value would be the filename of the icon.
  • content_scripts – these are the scripts that run in the context of the current web page. The matches property is where you specify an array of URLs where the content scripts can run. In this case we just set a special value, "<all_urls>", so the script can run on any web page. Next is the js property, where we specify an array of paths to the content scripts. Last is the run_at property, where we specify when to run the content scripts. We set it to document_end to make sure the whole page is loaded before we execute our script.
  • background – used to specify the background scripts. Content scripts only have access to the elements in the current page, not to the Chrome API methods, so we need a background script in order to access those methods. This property takes a single property called scripts, where you specify an array of the background scripts you wish to use. In this case we're just going to use a single background.js file.
  • permissions – this is where we specify an array of items that the extension needs to use or have access to. In this case we're just going to use "<all_urls>" and storage. We use storage to get access to the methods for saving custom settings for the extension. In our case the setting is the API key required by Ahead.
  • options_page – used for specifying which HTML file will be used for the options page.

Next let’s proceed with the options page:

<!DOCTYPE html>
<html>
<head><title>Ahead</title></head>
<body>

    API Key:
    <input type="text" id="api_key">

    <button id="save">Save</button>

    <script src="options.js"></script>
</body>
</html>

You can use CSS just like you would in a normal HTML page if you want, but for this tutorial we won't. The options page is pretty minimal: all we need is the actual field, a button to save the settings, and a reference to the options page JavaScript file.

Here’s the options.js file:

function save_options(){
  var api_key = document.getElementById('api_key').value;

  chrome.storage.sync.set({
    'api_key': api_key
  },
  function(){
    alert('API Key Saved!');
  });
}


function restore_options(){

  chrome.storage.sync.get({
    'api_key': ''
  },
  function(items){
    document.getElementById('api_key').value = items.api_key;
  });
}
document.addEventListener('DOMContentLoaded', restore_options);
document.getElementById('save').addEventListener('click',
    save_options);

In the above file we declared 2 functions: save_options and restore_options. save_options saves the settings to Chrome storage, and restore_options retrieves the settings from storage and populates the value of each field. In options.js we have access to the Chrome storage API; the main methods we're using are sync.set and sync.get. We use sync.set to save the settings in Chrome storage and then output an alert box saying the settings are saved once it's successful. sync.get, on the other hand, retrieves the existing setting from Chrome storage, and we use the retrieved value to populate the text field. save_options is called when the save button is clicked, and restore_options is called when the DOM of the options page has been fully loaded.

Next is the background.js file. We primarily use this file to listen for the click event on the browser_action, which is basically the icon of the extension located in the upper right corner of Chrome:

chrome.browserAction.onClicked.addListener(function(tab){

  chrome.tabs.query({active: true, currentWindow: true}, function(tabs) {
    var activeTab = tabs[0];
    chrome.tabs.sendMessage(activeTab.id, {"message": "clicked_browser_action"});
  });
});

You don't need to worry about the code above too much. All it does is listen for the click event on the icon of the extension, then use the tabs.sendMessage method to tell the current tab that the extension icon has been clicked. This brings us to the content.js file, which basically just waits for this message. Once it receives the message, we retrieve the API key using the sync.get method. We then make a POST request to the Ahead URL that accepts posts to be published. The content is the title of the current page followed by its URL. We construct a new form data object and supply queue, api_key and content as the fields. We set queue to true because we want to schedule the post to be published later; if you set it to false, it will be published immediately. For api_key we simply supply what we got from Chrome storage, and last is the content. We then send this form data to the Ahead URL. Finally, we listen for the onload event on the request, which is fired whenever the request is successful. All we have to do is parse the response, since it's a JSON string, and alert the value of the text property, which is basically a message saying that the post was scheduled and when it will be published. If we do get an error, the onerror event is fired and we simply tell the user to try again via an alert.

chrome.runtime.onMessage.addListener(
  function(request, sender, sendResponse){

    chrome.storage.sync.get({
        'api_key': ''
    },
    function(items){
        var api_key = items.api_key;

        var http_request = new XMLHttpRequest();
        http_request.open('POST', 'http://ec2-54-68-251-216.us-west-2.compute.amazonaws.com/api/post', true);
        var content = document.title + ' ' + window.location.href;
        var form_data = new FormData();
        form_data.append('queue', true);
        form_data.append('api_key', api_key);
        form_data.append('content', content);
        http_request.send(form_data);

        http_request.onload = function(){
            if(http_request.status >= 200 && http_request.status < 400){
              var response_data = JSON.parse(http_request.responseText);
              alert(response_data.text);
            }
        };


        http_request.onerror = function() {
            alert('Something went wrong while trying to post. Please try again');
        };
    });


  }
);

Installing the Extension

Now we're ready to actually install the extension. You can do that by enabling developer mode on the Chrome extensions page:

chrome://extensions/

This will show you 3 new buttons: load unpacked extension, pack extension and update extensions now. All we need is the first one. Click on it, then select the folder that contains the manifest.json file in its root directory. Chrome will then list it as one of the available extensions:

extensions

Once it's loaded, click on the 'options' link to access the options page. From there, add the API key which you can get from the Ahead website.

At this point, any new tab you open or any existing tab you reload will be usable with the extension. Just click on the extension icon and it will schedule a post using the title of the page and its URL as the content.

Conclusion

That's it! In this tutorial you've learned the basics of how to create a Chrome extension. You've learned how to listen for the click event on the extension icon, how to add an options page, and how to get the details from the current page.

Getting Started With Lumen

| Comments

In this tutorial I’ll walk you through Lumen, a lightweight framework from the same guys that made Laravel. Lumen is basically a lighter version of Laravel.

Installation

You can install Lumen by using composer’s create-project command. Simply execute the following command on your preferred install directory:

composer create-project laravel/lumen --prefer-dist

Once the installation is done, you can navigate to the lumen directory and execute the following:

php artisan serve --port=7771

This will serve the project on port 7771 of your localhost:

http://localhost:7771/

If the installation completed successfully, you will be greeted by the default screen.

Using Third Party Libraries

You can use third party libraries with Lumen by adding the package that you want to install in the composer.json file. Here’s an example:

"require": {
    "laravel/lumen-framework": "5.0.*",
    "vlucas/phpdotenv": "~1.0",
    "elasticsearch/elasticsearch": "~1.0",
    "guzzlehttp/guzzle": "~5.0"
},

Note that lumen-framework and phpdotenv are there by default, since those are needed in order for Lumen to work. In the above file we added elasticsearch and guzzlehttp as our dependencies.

You can then make Lumen aware of these libraries by initializing them in the files where you want to use them:

<?php
$http_client = new \GuzzleHttp\Client();
$es_client = new \Elasticsearch\Client();
?>

Configuration

By default Lumen is pretty bare-bones, which means we need to do some configuration if we want to use some of the features we usually have in Laravel. In Lumen you can enable most of those functionalities by editing the bootstrap/app.php file.

Enabling Sessions

You can enable sessions by removing the comment on the middleware which says Illuminate\Session\Middleware\StartSession:

<?php
$app->middleware([
    //'Illuminate\Cookie\Middleware\EncryptCookies',
    //'Illuminate\Cookie\Middleware\AddQueuedCookiesToResponse',
    'Illuminate\Session\Middleware\StartSession',
    //'Illuminate\View\Middleware\ShareErrorsFromSession',
    //'Laravel\Lumen\Http\Middleware\VerifyCsrfToken',
]);
?>

Enabling Eloquent

If you need to use Eloquent in your app, you can enable it by removing the comment on the following lines:

<?php
$app->withFacades();
$app->withEloquent();
?>

Dot Env

Lumen uses a .env file to set the environment configuration for the project. This way you can have a different .env file on your local machine and on your server, and you can have git ignore the file so that it doesn't get pushed to the server when you deploy your changes. Here's how the .env file looks by default:

APP_ENV=local
APP_DEBUG=false
APP_KEY=SomeRandomKey!!!

APP_LOCALE=en
APP_FALLBACK_LOCALE=en

DB_CONNECTION=mysql
DB_HOST=localhost
DB_DATABASE=homestead
DB_USERNAME=homestead
DB_PASSWORD=secret

CACHE_DRIVER=memcached
SESSION_DRIVER=memcached
QUEUE_DRIVER=database

As you can see from the file above, you can set the name of the environment via APP_ENV. Right after that is APP_DEBUG, which is set to false by default; if you're developing, set this to true so you have an idea of what's wrong when testing your app. Next is APP_KEY, which is basically used as a salt for sessions; you can use a random string generator for this. APP_LOCALE and APP_FALLBACK_LOCALE set the language of your app and default to English. Next is the database configuration: anything that starts with DB_ belongs to it. By default it expects to connect to a MySQL database. DB_HOST is the host on which the database is running, DB_DATABASE is the name of the database you want to connect to, DB_USERNAME is the username to log in with, and DB_PASSWORD is that user's password. After the database configuration come the cache, session and queue driver settings. The cache and session drivers use memcached by default, so you'll have to install memcached if you're using the caching and session functionalities. If memcached is not present on the system, Lumen just falls back to the default driver, the filesystem.

Note that before you can use the .env file, you need to uncomment the following line in your bootstrap/app.php file. This way Lumen will load the .env file on the root of your project.

Dotenv::load(__DIR__.'/../');

Directory Structure

Here's what the default directory structure looks like in Lumen. The ones with * are files:

app
bootstrap
database
public
resources
storage
tests
vendor
*artisan
*server.php
*composer.json

The app directory is where you will usually work. This is where the routes, controllers and middlewares are stored.

The bootstrap directory only contains one file by default, the app.php file. As you have seen earlier, it's where you can configure and add new functionality to Lumen.

The database directory is where the database migrations and seeders are stored. You use migrations to easily jump from one database version to another; it's like version control for your database. Seeds, on the other hand, are used to populate the database with dummy data so that you can easily test your app without having to enter the information through the app itself.

The public directory is where your public assets are stored: things like CSS, JavaScript and images.

The resources directory is where you store the views that you use for your app.

The storage directory is where logs, sessions and cache files are stored.

The tests directory is where you put your test files.

The vendor directory is where the dependencies of your app are stored. This is where composer installs the packages that you specified in your composer.json file.

The artisan file is used for command-line tasks in your project. We used it earlier when we served the project. It can also be used to create migrations, seeds and other things that you usually perform through the command line.

The server.php file is used for serving the files without the use of a web server like Apache.

Routes

Routes are stored in the app/Http/routes.php file. Here’s how you would declare a route in Lumen:

<?php
$app->get('/', function(){
    return 'Hello World!';
});
?>

If you want to use a controller method to handle the response for a specific route then you can do something like this:

<?php
$app->get('/', 'App\Http\Controllers\HomeController@index');
?>

Then you would need to create a HomeController controller and declare an index method, which will be used to return a response.

Controllers

Controllers are stored in the app/Http/Controllers directory. Needless to say, the convention is one file per controller. Otherwise it would be really confusing. Here’s the basic structure of a controller:

<?php namespace App\Http\Controllers;

use Illuminate\Http\Request;
use Laravel\Lumen\Routing\Controller as BaseController;

class HomeController extends BaseController{

}
?>

Note that we need to use Illuminate\Http\Request to be able to access the request parameters for each request. We also need to use Laravel\Lumen\Routing\Controller. This allows us to extend the functionality of the base controller class.

Views

Lumen still comes with Blade templating; all you have to do is create your views under the resources/views directory and use .blade.php as the file extension. Though unlike Laravel, you return views this way:

<?php
public function index(){
    return view('index');
}
?>

In the example above we're returning the index view stored in the root of the resources/views directory. If we want to pass some data along, we can supply the array or object that we want to pass:

<?php
$array = array(
    'name' => 'Ash Ketchum',
    'pokemon' => 'Pikachu'
);

return view('index', $array);
?>

It can then be rendered in the view like so:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>test</title>
</head>
<body>
    Hi my name is {{ $name }}, my Pokemon is {{ $pokemon }}
</body>
</html>

Database

When working with a database you first need to edit the database configuration values in your .env file.

Migrations

Once that’s done you can try if your app can connect to your database by creating a database migration. You can do that by executing the following command in the root directory of your project:

php artisan migrate:install

The command above creates the migrations table in your database. Lumen uses this table to keep track of which database migrations are currently applied. If that worked without problems and you see that a migrations table has been created in your database, then you're good to go.

Next you can create a new table by using the make:migration command. This takes the action you wish to perform; in this case we want to create a new table, so we use --create and supply the name of the table as the value. The second argument is the name that will be assigned to the migration class.

php artisan make:migration --create=users create_users_table

The command above will create a file which looks like the following in the database/migrations directory:

<?php

use Illuminate\Database\Schema\Blueprint;
use Illuminate\Database\Migrations\Migration;

class CreateUsersTable extends Migration {

    /**
     * Run the migrations.
     *
     * @return void
     */
    public function up()
    {
        Schema::create('users', function(Blueprint $table)
        {
            $table->increments('id');
            $table->timestamps();
        });
    }

    /**
     * Reverse the migrations.
     *
     * @return void
     */
    public function down()
    {
        Schema::drop('users');
    }

}
?>

The only things we need to edit here are the method calls inside the up method:

<?php
Schema::create('users', function(Blueprint $table)
{
    $table->increments('id');
    $table->string('name');
    $table->integer('age');
});
?>

That is where we specify the fields that we need to add to the users table.

Once you’re happy with the file, save it and then run:

php artisan migrate

This will create the table in your database and add a new row to the migrations table.

Seeds

You can create a new database seeder file inside the database/seeds directory. Here’s the usual structure of a seeder:

<?php

use Illuminate\Database\Seeder;

class UserTableSeeder extends Seeder
{
    public function run()
    {

        //seeding code       

    }
}
?>

Inside the run method is the actual seeding code. You can use the usual Laravel-flavored database queries inside it:

<?php
DB::table('users')->insert(
    array(
        'name' => 'Ash Ketchum',
        'age' => 10
    )
);

 DB::table('users')->insert(
    array(
        'name' => 'Brock',
        'age' => 15
    )
);

DB::table('users')->insert(
    array(
        'name' => 'Misty',
        'age' => 12
    )
);
?>

Once that's done, save the file and open up the DatabaseSeeder.php file. This is where you specify which seeders to run whenever you execute the php artisan db:seed command. In this case we want to add the UserTableSeeder:

$this->call('UserTableSeeder');

Before we execute the php artisan db:seed command, we first need to reload the autoloaded files by executing the composer dump-autoload command. We need to do this every time we add a new seeder so that Lumen can load it.
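Put together, the two commands above are run from the project root (they assume an existing Lumen project with the seeder in place):

```shell
composer dump-autoload
php artisan db:seed
```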

Getting Data

From your routes file you can now try fetching the users that we’ve added:

<?php
$app->get('/db-testing', function(){

    $users = DB::table('users')->get();
    return $users;
});
?>

With Lumen you can use the query builder, basic queries and even Eloquent. So if you already know how to work with those then you’re good to go.

Conclusion

That’s it! In this tutorial I’ve walked you through Lumen and how you can install, configure and work with the different functionalities that it can offer.

Implementing Audio Calls With PeerJS

| Comments

These past few days I've been playing around with WebRTC. For the uninitiated, WebRTC stands for Web Real-Time Communication. Things like chat and audio or video calling come to mind when you say real time, and that is really what WebRTC is about: it gives real-time superpowers to the web. In this tutorial I'll be showing you how to implement audio calls with PeerJS, a JavaScript library that allows us to easily implement peer-to-peer communications with WebRTC.

Things We Need

Before we start, go ahead and download the things we’ll need for this tutorial:

  • jQuery – I know, right! Who still uses jQuery these days? Raise your left foot. Kidding aside, yes, we still need jQuery. In this tutorial we'll only be using it to handle click events, so if you're confident with your Vanilla JavaScript-Fu then feel free to skip it.

  • PeerJS – In case you missed it earlier, we're going to need PeerJS so that we can easily implement WebRTC.

  • RecordRTC.js – This library mainly provides recording functionalities (e.g. taking screenshots and webcam photos, recording audio and video), but it also doubles as a shim provider. We won't really use the recording functionalities in this tutorial; we're only using it to be able to request the use of the microphone on the device.

Overview of the App

We're going to build an app that allows 2 users to call each other through the web via WebRTC. The app can use the PeerServer Cloud, or you can implement your own PeerJS server. As for outputting the audio coming from the microphone of each peer, we will use HTML5 Audio. All we have to do is convert the audio stream to a format that HTML5 Audio understands so that each user can listen to the audio coming from the other side.

Building the App

Now that we have a basic overview of how the app will work, it's time to actually build it.

First, link all the things that we’ll need:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>test</title>
    <script src="//cdn.peerjs.com/0.3/peer.min.js"></script>
    <script src="//cdnjs.cloudflare.com/ajax/libs/jquery/2.1.3/jquery.min.js"></script>
    <script src="//www.WebRTC-Experiment.com/RecordRTC.js"></script>
</head>

Yes, you can also put those script tags right before the closing body tag if performance is your thing.

Next is the HTML that the user will actually see:

<body>
    <button id="start-call">start call</button>
    <audio controls></audio>

Yup, I didn't miss anything. That's all we need: a button to start the call to another peer and an HTML5 audio tag to output the audio from the other end.

Now let’s proceed with the JavaScript. First declare a method that will get the query parameters by name.

function getParameterByName(name){
    name = name.replace(/[\[]/, "\\[").replace(/[\]]/, "\\]");
    var regex = new RegExp("[\\?&]" + name + "=([^&#]*)"),
        results = regex.exec(location.search);
    return results === null ? "" : decodeURIComponent(results[1].replace(/\+/g, " "));
}

The way this app works is by using from and to as query parameters, where from is the id you want to give to the peer who's currently using the device and to is the id of the peer on the other side. We use the method above to easily get those values. To illustrate, here's how the URL used to access the app would look on our side (john):

http://mysite.com/call-app.html?from=john&to=jane

And on the other side (jane), it would look like this:

http://mysite.com/call-app.html?from=jane&to=john

We've basically just interchanged the two peers, so we know exactly where the request is coming from and where it's going.
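To make the parsing concrete, here's a self-contained variant of getParameterByName that takes the query string explicitly instead of reading location.search, so it can run outside the browser (the getParam name and the sample values are just for illustration):

```javascript
// Same regex-based extraction as getParameterByName above, but with the
// query string passed in as an argument.
function getParam(name, search) {
  name = name.replace(/[\[]/, "\\[").replace(/[\]]/, "\\]");
  var regex = new RegExp("[\\?&]" + name + "=([^&#]*)");
  var results = regex.exec(search);
  return results === null ? "" : decodeURIComponent(results[1].replace(/\+/g, " "));
}

// On john's side of the example URL above:
console.log(getParam('from', '?from=john&to=jane')); // "john"
console.log(getParam('to', '?from=john&to=jane'));   // "jane"
```

A missing parameter simply returns an empty string, which is why the app can be loaded without crashing even when the URL is incomplete.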

Next we declare the method that asks the user for permission for the page to use the microphone. This method takes 2 parameters, successCallback and errorCallback. The successCallback is called when the page is granted permission to use the microphone, and the errorCallback is called when the user declines.

function getAudio(successCallback, errorCallback){
    navigator.getUserMedia({
        audio: true,
        video: false
    }, successCallback, errorCallback);
}

Next, declare the method that will be called when a call is received from a peer. This method has the call object as its parameter, which we use to answer the call. But first we need to ask the user for permission to use the microphone by calling the getAudio method. Once we get permission, we can answer by calling the answer method on the call object, which takes the MediaStream as its argument. If we didn't get permission to use the microphone, we just log that an error occurred and output the actual error. Finally, we listen for the stream event on the call and call the onReceiveStream method when the event happens. This stream event can be triggered in 2 ways: when a peer initiates a call to another peer, and when the other peer actually answers the call.

function onReceiveCall(call){

    console.log('peer is calling...');
    console.log(call);

    getAudio(
        function(MediaStream){
            call.answer(MediaStream);
            console.log('answering call started...');
        },
        function(err){
            console.log('an error occured while getting the audio');
            console.log(err);
        }
    );

    call.on('stream', onReceiveStream);
}

Next is the onReceiveStream method, which is called when a media stream is received from the other peer. This is where we convert the media stream to a URL that we use as the source for the audio tag. The stream is basically an object containing the current audio data, and we convert it to a URL by using the window.URL.createObjectURL method. Once all the metadata is loaded, we play the audio.

function onReceiveStream(stream){
    var audio = document.querySelector('audio');
    audio.src = window.URL.createObjectURL(stream);
    audio.onloadedmetadata = function(e){
        console.log('now playing the audio');
        audio.play();
    }
}

Now that we're done with all the method declarations, it's time to actually call them. First we need to know where the request is coming from and who it will be sent to.

var from = getParameterByName('from');
var to = getParameterByName('to');

Next we declare a new peer. This takes the id of the peer as its first argument, and the second argument is an object containing the PeerJS key. If you do not have a key yet, you can register for the PeerJS Cloud Service; it's free for up to 50 concurrent connections. After that, we also need to set the ICE server config. This ensures that the peers can connect to each other without having to worry about external IPs assigned by routers, firewalls, proxies and other kinds of network security that can get in the way. You need to have at least one STUN server and one TURN server in the configuration. You can get a list of freely available STUN and TURN servers here.

var peer = new Peer(
    from,
    {
        key: 'Your PeerJS API Key',
        config: {'iceServers': [
            { url: 'stun:stun1.l.google.com:19302' },
            { url: 'turn:numb.viagenie.ca', credential: 'muazkh', username: 'webrtc@live.com' }
        ]}
    }
);

If you want to use your own server and get past the 50 concurrent connections limit of the PeerServer cloud, you can install PeerJS Server on your existing Express app in Node:

npm install peer --save

And then use it like so:

11
var express = require('express');
var express_peer_server = require('peer').ExpressPeerServer;
var peer_options = {
    debug: true
};

var app = express();

var server = app.listen(3000);

app.use('/peerjs', express_peer_server(server, peer_options));

And from the client side you can now use your shiny new PeerJS server:

var peer = new Peer(from, {
        host: 'your-peerjs-server.com', port: 3000, path: '/peerjs',
        config: {'iceServers': [
            { url: 'stun:stun1.l.google.com:19302' },
            { url: 'turn:numb.viagenie.ca', credential: 'muazkh', username: 'webrtc@live.com' }
        ]}
    }
);

The next bit of code is optional. We only use it to verify that the peer was actually created. Here we simply listen for the open event on the peer object, and once it fires, we output the peer id.

peer.on('open', function(id){
    console.log('My peer ID is: ' + id);
});

Next we listen for the call event. This is triggered when a peer tries to make a call to the current user.

peer.on('call', onReceiveCall);

Finally, here’s the code we use when we initiate the call ourselves:

$('#start-call').click(function(){

    console.log('starting call...');

    getAudio(
        function(MediaStream){

            console.log('now calling ' + to);
            var call = peer.call(to, MediaStream);
            call.on('stream', onReceiveStream);
        },
        function(err){
            console.log('an error occured while getting the audio');
            console.log(err);
        }
    );

});

What this does is listen for the click event on the start-call button. It then calls the getAudio method to ask the user for permission to use the microphone. If the user allows it, the call is made to the peer using the call method. This takes the id of the peer on the other side and the MediaStream. Next, we just listen for the stream event and call the onReceiveStream method when it happens. Note that this stream is the audio stream from the peer on the other side and not the audio stream of the current user. Otherwise we would also hear our own voice. The same is true of the stream that we're getting in the onReceiveCall method.

Conclusion

That’s it! In this tutorial we’ve learned how to implement audio calls with WebRTC and PeerJS. Be sure to check out the resources below if you want to learn more.

Resources

Getting Started With CouchDB in Node.js

| Comments

In this tutorial I'm going to walk you through how to get started with CouchDB in Node.js. But first, here's some background on what CouchDB is. CouchDB is a NoSQL database from the Apache Foundation. Like many other NoSQL databases, it uses JSON to store data, and it deals with separate documents instead of tables and fields.

Installing CouchDB

You can install CouchDB by executing the following command:

sudo apt-get install couchdb

Once that's done you can test if it's successfully installed by accessing http://localhost:5984/ from your browser. You're good to go if it returns a response similar to the following:

{"couchdb":"Welcome","uuid":"0eb12dd741b22a919c8701dd6dc14087","version":"1.5.0","vendor":{"version":"14.04","name":"Ubuntu"}}

Futon

If you're from RDBMS land, you might be familiar with phpMyAdmin. Futon is CouchDB's equivalent of phpMyAdmin: it allows you to manage your CouchDB databases with ease. In case you're wondering what Futon means, it's a Japanese word for traditional Japanese bedding.

Ok enough with the trivia. You can access Futon by going to http://localhost:5984/_utils/. It should show you something similar to the following:

futon

The first thing you need to do is configure Futon so that it has an admin user, because by default every user who has access to it has admin privileges. It can only be accessed from the local computer, so this isn't really a security issue unless an attacker gets access to the server. You can set up an admin by going to the configuration page. Just click 'Configuration' under the Tools menu to get there. Next click on the 'setup admin' link found at the bottom right corner. This should open up a modal that asks you to enter the username and password that you can use for logging in as admin.

Just enter your desired username and password and then click 'create' to create the admin. You can now log in as an admin by clicking on the 'login' link. Once you have set up your first admin user, non-admin users will only have read privileges.

With Futon you can create a new database, add documents, update documents, delete documents and delete a database. Using Futon is pretty straightforward so I’m just going to leave it to you to explore it.

Creating a Database

You can create a new database via Futon. From the Futon index page, click on the ‘create database’ link to create a new database. This will create a new database where you can add new documents.

Adding New Documents

You can add new documents by making a curl request to port 5984 of your localhost. Here’s an example:

curl -X POST http://127.0.0.1:5984/test_db/ -d '{"name": "Ash Ketchum", "age": 10, "type": "trainer"}' -H "Content-Type: application/json"

Here’s a breakdown of the options we have passed to curl:

  • -X POST http://127.0.0.1:5984/test_db/ – the -X option is used to specify the type of request, followed by the URL at which CouchDB is running plus the database to post to. In this case the request type is POST.
  • -d '{"name": "Ash Ketchum", "age": 10, "type": "trainer"}' – -d is used for specifying the data that you want to submit. In this case we're using a JSON string to represent the data. Note that no fields are required by CouchDB, but it's helpful to specify a type field so that we can easily query documents later on based on their type.
  • -H "Content-Type: application/json" – -H is used for specifying a request header.

Executing the command above will return something similar to the following:

{
    "ok":true,
    "id":"cc6b37f1e6b2215f2a5ccac38c000a43",
    "rev":"1-61280846062dcdb986c5a6c4aa9aaf03"
}

This contains the status of the request (ok), the id assigned to the document (id), and the revision number (rev).

Retrieving Documents

You can retrieve all documents from a specific database by using a GET request:

curl -X GET http://127.0.0.1:5984/test_db/_all_docs 

This returns the following:

{
    "total_rows":1,
    "offset":0,
    "rows":[
        {
            "id":"cc6b37f1e6b2215f2a5ccac38c000a43",
            "key":"cc6b37f1e6b2215f2a5ccac38c000a43",
            "value":{
                "rev":"1-61280846062dcdb986c5a6c4aa9aaf03"
            }
        }
    ]
}

Note that this only returns the id, key and value of each document and not its actual contents. If you also need the contents, just add include_docs as a query parameter and set its value to true:

curl -X GET http://127.0.0.1:5984/test_db/_all_docs?include_docs=true

If you want to retrieve a specific document, use the document id:

curl -X GET http://127.0.0.1:5984/test_db/cc6b37f1e6b2215f2a5ccac38c000a43

If you want to retrieve a specific revision, you can supply rev as a query parameter and then use the revision id as the value.

curl -X GET http://127.0.0.1:5984/test_db/cc6b37f1e6b2215f2a5ccac38c000a43?rev=1-61280846062dcdb986c5a6c4aa9aaf03

Updating Documents

You can update documents by using the document id and the revision id. All you have to do is make a PUT request to the database that you want to update and add the document id as a path. And then supply the updated data along with the revision that you want to update:

curl -X PUT http://127.0.0.1:5984/test_db/cc6b37f1e6b2215f2a5ccac38c000a43 -d '{"_rev": "1-61280846062dcdb986c5a6c4aa9aaf03", "name": "Ash Ketchum", "age": 12, "type": "trainer"}' -H "Content-Type: application/json"

It should return something similar to the following if the update was successful:

{
    "ok":true,
    "id":"cc6b37f1e6b2215f2a5ccac38c000a43",
    "rev":"2-0023f19d7d3097468a8eeec014018840"
}

Revisions are an important feature of CouchDB. They're like built-in version control for each document: you can always go back to a previous version of a specific document as long as you haven't deleted it.

Deleting Documents

You can delete a document by using the same path as updating documents or when you’re retrieving them. The only difference is you need to use a DELETE request and supply the revision id as a query parameter:

curl -X DELETE http://127.0.0.1:5984/test_db/cc6b37f1e6b2215f2a5ccac38c000a43?rev=2-0023f19d7d3097468a8eeec014018840

This deletes the second revision of the document. If you check the document from Futon, you will no longer see it there. But you will still be able to get a specific revision which hasn't been deleted if you supply the previous revision id in the request for getting a specific document:

curl -X GET http://127.0.0.1:5984/test_db/cc6b37f1e6b2215f2a5ccac38c000a43?rev=1-61280846062dcdb986c5a6c4aa9aaf03

Backup and Restore

Unlike phpMyAdmin, Futon doesn't come with backup and restore capabilities. Good thing we have this awesome guy who created a backup and restore utility for CouchDB. Just download the couchdb-backup.sh file from the GitHub repo and place it somewhere on your computer.

To back up a specific database, just use the bash command and supply the filename of the backup utility. You supply the -b option to back up and -r to restore. -H is the host; if you don't supply a port it uses 5984 by default. -d is the name of the database. -f is the filename of the backup file that will be created. -u is the admin username that you use for logging in to Futon. And -p is the password:

bash couchdb-backup.sh -b -H 127.0.0.1 -d test_db -f test_db.json -u your_username -p your_password

To restore the backup, just supply the -r option instead of -b:

bash couchdb-backup.sh -r -H 127.0.0.1 -d test_db -f test_db.json -u your_username -p your_password

Views

Views are used to query the database for specific data. If you're coming from RDBMS land, you usually select specific data using the SELECT command, narrow it down with WHERE, and call it a day. CouchDB is different because it doesn't come with functions that allow you to select specific data easily. In CouchDB we need to use views. A view is basically just a JavaScript function that emits the documents that you need.

Before we move on with working with views, you can add the following document to your CouchDB database if you want to follow along:

{"new_edits":false,"docs":[
{"_id":"cc6b37f1e6b2215f2a5ccac38c000e58","_rev":"1-cbc1dd4e0dd53b3f9770bb8edc30ae33","name":"pikachu","type":"electric","trainer":"ash","gender":"m"},
{"_id":"cc6b37f1e6b2215f2a5ccac38c001e2c","_rev":"2-fbe6131ea1248b83301900a5954dec6d","name":"squirtle","type":"water","trainer":"ash","gender":"m"},
{"_id":"cc6b37f1e6b2215f2a5ccac38c0020d9","_rev":"1-8f98424393470486d60cf5fff00f33d3","name":"starmie","type":"water","trainer":"misty","gender":"f"},
{"_id":"cc6b37f1e6b2215f2a5ccac38c00215e","_rev":"1-aac04234d60216760bd9e3f89fa602e9","name":"geodude","type":"rock","trainer":"brock","gender":"m"},
{"_id":"cc6b37f1e6b2215f2a5ccac38c0030b4","_rev":"1-280586eb35fc3bde31f88ec9913f3dcb","name":"onix","type":"rock","trainer":"brock","gender":"m"}
]}

What you see above is a backup file which you can restore by using the backup and restore utility which I introduced earlier.

Creating a View

You can create a view by selecting your database from Futon. From there, look for the view dropdown box and then select ‘temporary view…’. This allows you to test and create a view. Enter the following in the view code box:

function(doc) {
   emit(doc.type, null);
}

Click on 'run' to run it. This will list all of the documents in the database using the type field as the key. We have set the value to null because we don't need it. The value could be set to doc, in which case the value returned will be the actual contents of the document. You can do that, but it's not really good practice since it consumes a lot of memory. Once you see some output you can click on 'save as' and then supply the name of the design document and the view name. You can use any names you want, but it's good practice to give the design document a name which represents the type of document; in this case that's 'pokemon'. And the view name would be the key that you use. Some folks prefix it with by_, which I also prefer, so I'll name the view 'by_type'. Click on 'save' once you're done giving the names.
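Since a map function is plain JavaScript, you can also dry-run it outside CouchDB by stubbing the emit function. This is just a sketch for experimenting; inside CouchDB, emit is provided by the server:

```javascript
// Sample documents from the backup above, trimmed to the fields the view uses.
var docs = [
    { name: 'pikachu',  type: 'electric' },
    { name: 'squirtle', type: 'water' },
    { name: 'starmie',  type: 'water' },
    { name: 'geodude',  type: 'rock' },
    { name: 'onix',     type: 'rock' }
];

// Collect whatever the map function emits.
var rows = [];
function emit(key, value){
    rows.push({ key: key, value: value });
}

// The same map function we saved as the 'by_type' view.
function byType(doc){
    emit(doc.type, null);
}

docs.forEach(byType);
// rows now holds one { key, value } pair per document, keyed by type.
```

Filtering rows for key === 'water' gives two entries, which matches the filtered result the real view returns.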

Here’s how you can use the view:

curl "http://127.0.0.1:5984/test_db/_design/pokemon/_view/by_type?key=%22water%22"

Breaking it down, the first part of the URL is the host where CouchDB is running:

http://127.0.0.1:5984

Next is the database:

test_db

And then you specify the name of the design document by supplying _design followed by the name of the design document:

_design/pokemon

Next you also need to specify the view:

_view/by_type

And then lastly, your query:

key=%22water%22

Note that you need to supply a URL-encoded query. %22 represents double quotes, so we're wrapping the actual query with %22 instead of literal double quotes. Executing it returns the following. It's basically the same as what you saw in Futon, but this time it's filtered according to the value you supplied as the key:

{
    "total_rows":5,
    "offset":3,
    "rows":[
        {
            "id":"cc6b37f1e6b2215f2a5ccac38c001e2c",
            "key":"water",
            "value":null
        },
        {
            "id":"cc6b37f1e6b2215f2a5ccac38c0020d9",
            "key":"water",
            "value":null
        }
    ]
}

So the idea with views is that you emit the value of the field that you want to query on. In this case we have emitted the type field.
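Rather than hand-writing %22, you can build the URL-encoded key programmatically. Here's a small sketch in JavaScript; view keys are JSON values, so the string key is JSON-encoded first and then URL-encoded:

```javascript
// JSON-encode the key (this adds the surrounding double quotes),
// then URL-encode it for use in the query string.
var key = encodeURIComponent(JSON.stringify('water'));

var url = 'http://127.0.0.1:5984/test_db/_design/pokemon/_view/by_type?key=' + key;
// key is now '%22water%22', the same value we typed into the curl command earlier.
```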

Working with Node.js

You can work with CouchDB using the Nano package. You can install it in your project by executing the following command:

npm install nano --save

To use nano, create a new JavaScript file and name it app.js. Then you can connect to CouchDB by adding the following code:

var nano = require('nano')('http://localhost:5984');

If you already have a specific database to work with, you can connect to it by using the db.use method and then supply the name of the database as the argument:

var test_db = nano.db.use('test_db');

Creating New Documents

You can create new documents by using the insert method:

var data = { 
    name: 'pikachu', 
    skills: ['thunder bolt', 'iron tail', 'quick attack', 'mega punch'], 
    type: 'electric' 
};

test_db.insert(data, 'unique_id', function(err, body){
  if(!err){
    //awesome
  }
});

The insert method takes the data that you want to save as its first argument, the id as its second argument, and as the third a function that will be called once it gets a response. Note that the id is optional, so you can choose whether to supply a value. If you don't supply one, CouchDB will automatically generate a unique id for you.

Retrieving Documents

Views are still utilized when retrieving specific documents from CouchDB with Nano. The view method is used for specifying which view you want to use. This method takes the name of the design document as its first argument, the name of the view as its second, and the query parameters that you want to pass in as the third. The fourth argument is the function that you want to execute once a response has been received:

var type = 'water';
test_db.view('pokemon', 'by_type', {'key': type, 'include_docs': true}, function(err, body){
    if(!err){
        var rows = body.rows; //the rows returned
    }
});
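When include_docs is set to true, each row also carries a doc property with the full document. Pulling the documents out of the response body is a one-liner; here's a sketch against a mocked body (a real one comes from CouchDB):

```javascript
// Mocked shape of the body Nano passes to the callback when include_docs is true.
var body = {
    total_rows: 2,
    offset: 0,
    rows: [
        { id: 'cc6b37f1e6b2215f2a5ccac38c001e2c', key: 'water', value: null,
          doc: { name: 'squirtle', type: 'water' } },
        { id: 'cc6b37f1e6b2215f2a5ccac38c0020d9', key: 'water', value: null,
          doc: { name: 'starmie', type: 'water' } }
    ]
};

// Extract just the documents from the rows.
var docs = body.rows.map(function(row){
    return row.doc;
});
```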

Updating Documents

Nano doesn't come with an update method by default, which is why we need to define a custom method to do it for us. Declare the following near the top of your app.js file, right after your database connection code:

test_db.update = function(obj, key, callback){
    var db = this;
    db.get(key, function(error, existing){
        if(!error) obj._rev = existing._rev;
        db.insert(obj, key, callback);
    });
}
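To see how the helper carries the latest _rev over, you can exercise the same get-then-insert logic against an in-memory stand-in for Nano. This is purely an illustrative sketch; the stub only mimics the two methods the helper touches, with fake revision ids:

```javascript
// In-memory stand-in mimicking the Nano methods used by the update helper.
function makeStubDb(){
    var store = {};
    var db = {
        get: function(key, callback){
            if(store[key]) callback(null, store[key]);
            else callback(new Error('not_found'));
        },
        insert: function(obj, key, callback){
            var prev = store[key];
            // Bump a fake revision number, in the spirit of CouchDB's N-hash revs.
            var seq = prev ? parseInt(prev._rev, 10) + 1 : 1;
            obj._rev = seq + '-stub';
            store[key] = obj;
            callback(null, { ok: true, id: key, rev: obj._rev });
        }
    };
    // The same update helper as above, attached to the stub.
    db.update = function(obj, key, callback){
        var self = this;
        self.get(key, function(error, existing){
            if(!error) obj._rev = existing._rev;
            self.insert(obj, key, callback);
        });
    };
    return db;
}
```

Inserting a document and then updating it yields revision '2-stub', showing that the helper fetched and reused the stored _rev before re-inserting.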

You can then use the update method in your code:

test_db.update(doc, doc_id, function(err, res){
    if(!err){
        //document has been updated
    }
});

Note that you need the id of the document when performing an update. That's why you first need to create a view that emits a unique field as the key and the document id as the value. In this case the unique field is the name; each Pokemon has a unique name, so this works:

function(doc) {
   emit(doc.name, doc._id);
}

Just give this view a design document name of 'pokemon' and a view name of 'by_name'. You can then use this view to update a Pokemon by name. All you have to do is call the update method once you have retrieved the id and the current document:

var name = 'pikachu';
test_db.view('pokemon', 'by_name', {'key': name, 'include_docs': true}, function(select_err, select_body){
    if(!select_err){
        var doc_id = select_body.rows[0].id;
        var doc = select_body.rows[0].doc;

        //do your updates here
        doc.age = 99; //you can add new fields or update existing ones

        test_db.update(doc, doc_id, function(err, res){
            if(!err){
                //document has been updated
            }
        });
    }
});

Deleting Documents

If you no longer want a specific document and need to delete it, you can use the destroy method. This takes the id of the document as the first argument, the revision id of the revision that you want to delete as the second argument, and the function that you want to execute once you get a response as the third:

test_db.destroy(doc_id, revision_id, function(err, body) {
    if(!err){
        //done deleting
    }
});

Conclusion

That's it! In this tutorial you've learned the basics of using CouchDB through Futon, curl and Node.js. We have barely scratched the surface with this tutorial. Do check out the resources below if you want to learn more.

Resources

Getting Started With the Yahoo Finance API

| Comments

The Yahoo Finance API provides a way for developers to get the latest information about the stock market: how the different stocks are doing, what the current buying price for a single stock is, how much the current market value differs from yesterday's, and so on.

The first thing that you need to do is install the Guzzle library for PHP. This allows us to easily make HTTP requests to the server. You can do that by adding the following to your composer.json file:

{
   "require": {
      "guzzlehttp/guzzle": "~5.0"
   }
}

Then execute composer install from your terminal.

Next create a test.php file and put the following code:

<?php
require 'vendor/autoload.php';
$client = new GuzzleHttp\Client();
?>

This allows us to use guzzle from our file.

Before we move on here are the specific data that you can get from the API:

Pricing

  • a – ask
  • b – bid
  • b2 – ask (realtime)
  • b3 – bid (realtime)
  • p – previous close
  • o – open

Dividends

  • y – dividend yield
  • d – dividend per share
  • r1 – dividend pay date
  • q – ex-dividend date

Date

  • c1 – change
  • c – change & percentage change
  • c6 – change (realtime)
  • k2 – change percent
  • p2 – change in percent
  • d1 – last trade date
  • d2 – trade date
  • t1 – last trade time

Averages

  • c8 – after hours change
  • c3 – commission
  • g – day’s low
  • h – day’s high
  • k1 – last trade (realtime) with time
  • l – last trade (with time)
  • l1 – last trade (price only)
  • t8 – 1 yr target price
  • m5 – change from 200 day moving average
  • m6 – percent change from 200 day moving average
  • m7 – change from 50 day moving average
  • m8 – percent change from 50 day moving average
  • m3 – 50 day moving average
  • m4 – 200 day moving average

Misc

  • w1 – day’s value change
  • w4 – day’s value change (realtime)
  • p1 – price paid
  • m – day’s range
  • m2 – day’s range (realtime)
  • g1 – holding gain percent
  • g3 – annualized gain
  • g4 – holdings gain
  • g5 – holdings gain percent (realtime)
  • g6 – holdings gain (realtime)
  • t7 – ticker trend
  • t6 – trade links
  • i5 – order book (realtime)
  • l2 – high limit
  • l3 – low limit
  • v1 – holdings value
  • v7 – holdings value (realtime)
  • s6 – revenue

52 Week Pricing

  • k – 52 week high
  • j – 52 week low
  • j5 – change from 52 week low
  • k4 – change from 52 week high
  • j6 – percent change from 52 week low
  • k5 – percent change from 52 week high
  • w – 52 week range

Symbol Info

  • v – more info
  • j1 – market capitalization
  • j3 – market cap (realtime)
  • f6 – float shares
  • n – name
  • n4 – notes
  • s – symbol
  • s1 – shares owned
  • x – stock exchange
  • j2 – shares outstanding

Volume

  • v – volume
  • a5 – ask size
  • b6 – bid size
  • k3 – last trade size
  • a2 – average daily volume

Ratios

  • e – earnings per share
  • e7 – eps estimate current year
  • e8 – eps estimate next year
  • e9 – eps estimate next quarter
  • b4 – book value
  • j4 – EBITDA
  • p5 – price / sales
  • p6 – price / book
  • r – P/E ratio
  • r2 – P/E ratio (realtime)
  • r5 – PEG ratio
  • r6 – price / eps estimate current year
  • r7 – price /eps estimate next year
  • s7 – short ratio

Wew! Ok so that’s a lot. I’ll let you catch your breath for a second. Ready?

Ok so now we're ready to make a request to the API. You can either do that from here:

http://download.finance.yahoo.com/d/quotes.csv?s={SYMBOLS}&f={DATA THAT WE WANT}

Or here:

http://finance.yahoo.com/d/quotes.csv?s={SYMBOLS}&f={DATA THAT WE WANT}

It doesn't really matter which; both return the same thing. Here's an example which you can just copy and paste into your browser's address bar:

http://finance.yahoo.com/d/quotes.csv?s=GOOGL&f=abo

Breaking it down, we make a request to this URL:

http://finance.yahoo.com/d/quotes.csv

And then we pass in some query parameters: s and f. s represents the symbol or symbols that you want to query, and f represents the data that you want. That's the big list we just went through earlier. So if you want the API to return the ask, bid and open, we just need to pass in:

f=abo

In our example, we're requesting this information for the GOOGL symbol, which is Google. When this is requested in the browser, it downloads a quotes.csv file which contains something similar to the following:

580.36,575.90,576.35

It's a comma-separated list of all the values you requested. So 580.36 is the ask price, 575.90 is the bid price, and 576.35 is the open price.
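If you're consuming this response in JavaScript, splitting each line on commas in the same order as the f parameter is enough. Here's a quick sketch; the field names are our own labels, not anything the API sends back:

```javascript
// Parse one quotes.csv line that was requested with f=abo (ask, bid, open).
function parseQuote(line){
    var parts = line.split(',');
    return {
        ask:  parseFloat(parts[0]),
        bid:  parseFloat(parts[1]),
        open: parseFloat(parts[2])
    };
}

var quote = parseQuote('580.36,575.90,576.35');
// quote.ask === 580.36, quote.bid === 575.9, quote.open === 576.35
```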

If you want to query more than one symbol, you just separate each symbol with a comma. So for example, if you want to request the stock information for Google, Apple, Microsoft and Facebook:

http://finance.yahoo.com/d/quotes.csv?s=GOOGL,AAPL,MSFT,FB&f=abo

Now let’s proceed with actually making this all work with PHP. First we need to create a table that will store all the information that we need. In this case, we only need the symbol, ask, bid and open values:

CREATE TABLE symbols (
    id INT(6) UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    symbol VARCHAR(30) NOT NULL,
    ask DOUBLE,
    bid DOUBLE,
    open DOUBLE
)

Next create an indexer.php file. What this file does is query the Yahoo Finance API and then save the results to a CSV file. Note that we can only query up to 200 symbols per request, so we'll have to work around that in our code.

The first thing the code below does is query the number of symbols currently in the database. Then we calculate how many times we need to loop in order to update all the symbols. We also declare the file path of the CSV file in which we'll save all the results from the API, and initialize it by setting its value to an empty string. Then we declare the format sabo, which means symbol, ask, bid and open. Next we create a for loop that keeps executing until the value of $x reaches the total loop count we got from dividing the total number of symbols by the API limit. Inside the loop we calculate the offset value by multiplying the current value of $x by the API limit. After that, we select the symbols that we need based on that offset. Then we loop through the results, pick out each symbol and put them in an array. After looping through all the results, we convert the array into a comma-separated list, which allows us to use it for querying the API. Once we get the result back, we just save it to the CSV file using file_put_contents.

<?php
require 'vendor/autoload.php';
$db = new Mysqli(HOST, USER, PASS, DB);
$client = new GuzzleHttp\Client();

$symbols_count_result = $db->query("SELECT COUNT(id) FROM symbols");
$symbol_row = $symbols_count_result->fetch_row();
$symbol_count = $symbol_row[0];

$api_limit = 200;

$loop_times = $symbol_count / $api_limit;
$loop_times = floor($loop_times) + 1;

$file = 'uploads/csv/stocks.csv';
file_put_contents($file, '');

$format = 'sabo';

for($x = 0; $x < $loop_times; $x++){

    $from = $x * $api_limit;
    $symbols_result = $db->query("SELECT * FROM symbols LIMIT $api_limit OFFSET $from");

    if($symbols_result->num_rows > 0){

        $symbols = array();
        while($row = $symbols_result->fetch_object()){
            $symbols[] = $row->symbol;
        }

        $symbols_str = implode(',', $symbols);
        $stocks = $client->get("http://download.finance.yahoo.com/d/quotes.csv?s={$symbols_str}&f={$format}");

        file_put_contents($file, $stocks->getBody(), FILE_APPEND);
    }
}
?>
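The batching arithmetic in the loop above is easy to check in isolation. Here's the same offset computation sketched in JavaScript, with the floor-plus-one loop count and the num_rows guard mirrored:

```javascript
// Mirror of the PHP loop bounds: at most 200 symbols per API request.
var apiLimit = 200;

function batchOffsets(totalSymbols){
    // Same formula as the PHP code: floor(count / limit) + 1 iterations.
    var loops = Math.floor(totalSymbols / apiLimit) + 1;
    var offsets = [];
    for(var x = 0; x < loops; x++){
        var from = x * apiLimit;
        // Mirrors the num_rows > 0 guard: skip the empty trailing batch.
        if(from < totalSymbols) offsets.push(from);
    }
    return offsets;
}
// batchOffsets(450) → [0, 200, 400]; batchOffsets(400) → [0, 200]
```

The guard matters because when the symbol count is an exact multiple of 200, the floor-plus-one formula produces one extra, empty iteration.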

That’s it! The Yahoo Finance API is a really nice way of getting financial information about specific companies.