Wern Ancheta

Adventures in Web Development.

Implementing Video Calls With PeerJS


Picking up from where we left off last time, let's now add video to our simple calling app with PeerJS. If you haven't read my previous tutorial, go ahead and read it first, as this article won't make sense if you haven't.

First, we still need the same scripts we used in the last tutorial.

<script src="//cdn.peerjs.com/0.3/peer.min.js"></script>
<script src="//cdnjs.cloudflare.com/ajax/libs/jquery/2.1.3/jquery.min.js"></script>
<script src="//www.WebRTC-Experiment.com/RecordRTC.js"></script>

But for our HTML, we need to replace the audio element with video. We also set the video to autoplay so that as soon as the stream becomes available, the video starts playing.

<button id="start-call">start call</button>
<video controls autoplay></video>

For our custom script, we still have the getParameterByName function.

function getParameterByName(name){
    name = name.replace(/[\[]/, "\\[").replace(/[\]]/, "\\]");
    var regex = new RegExp("[\\?&]" + name + "=([^&#]*)"),
        results = regex.exec(location.search);
    return results === null ? "" : decodeURIComponent(results[1].replace(/\+/g, " "));
}
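To see what this function does, here's a small standalone variant for illustration. The name getParam and the explicit search argument are my own assumptions; the original reads from location.search directly, which is the only difference.

```javascript
// Standalone variant of getParameterByName that takes the query string
// explicitly instead of reading location.search (illustrative only).
function getParam(name, search){
    name = name.replace(/[\[]/, "\\[").replace(/[\]]/, "\\]");
    var regex = new RegExp("[\\?&]" + name + "=([^&#]*)"),
        results = regex.exec(search);
    return results === null ? "" : decodeURIComponent(results[1].replace(/\+/g, " "));
}
```

So for a URL like page.html?from=alice&to=bob, getParam('from', '?from=alice&to=bob') yields 'alice', and a missing parameter yields an empty string.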

As for the getAudio function that we previously used for getting audio input from the user's device, we now replace it with getVideo:

function getVideo(successCallback, errorCallback){
    navigator.getUserMedia({audio: true, video: true}, successCallback, errorCallback);
}
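Note that at the time of writing, some browsers only exposed getUserMedia under a vendor prefix. A small helper like the one below, which is an illustrative sketch of mine and not part of the original tutorial, picks whichever implementation is available:

```javascript
// Return whichever getUserMedia implementation the given navigator-like
// object exposes, or null if none is available. Written as a plain
// function so it's easy to test; in the app you'd call
// pickGetUserMedia(navigator).
function pickGetUserMedia(nav){
    return nav.getUserMedia ||
           nav.webkitGetUserMedia ||
           nav.mozGetUserMedia ||
           null;
}
```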

When the call is received, we now call the getVideo function instead of getAudio.

function onReceiveCall(call){

    console.log('peer is calling...');
    console.log(call);

    getVideo(
        function(MediaStream){
            call.answer(MediaStream);
            console.log('answering call started...');
        },
        function(err){
            console.log('an error occurred while getting the video');
            console.log(err);
        }
    );

    call.on('stream', onReceiveStream);
}

Once a stream is received, we also need to replace the element that we’re selecting. So we now select the video element instead of audio.

function onReceiveStream(stream){
    var video = document.querySelector('video');
    video.src = window.URL.createObjectURL(stream);
    video.onloadedmetadata = function(){
        console.log('loaded');
    };

}
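As an aside, createObjectURL on a MediaStream has since been deprecated; newer browsers prefer assigning the stream to the element's srcObject property. Here's a hedged sketch of a helper supporting both paths; the name attachStream is my own, not from the tutorial:

```javascript
// Attach a MediaStream to a media element, preferring the newer srcObject
// property and falling back to createObjectURL for older browsers.
function attachStream(element, stream){
    if ('srcObject' in element) {
        element.srcObject = stream;
    } else {
        element.src = window.URL.createObjectURL(stream);
    }
    return element;
}
```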

The code for getting the current user and the peer is also the same.

var from = getParameterByName('from');
var to = getParameterByName('to');

But for the creation of the peer, we now use the PeerServer cloud service instead of running our own, since we already covered that last time.

var peer = new Peer(from, {key: 'Your PeerJS API Key'});

Then we listen for the open event on the peer just so we can check if the peer has actually been created.

peer.on('open', function(id){
    console.log('My peer ID is: ' + id);
});

We also listen to the call event so we can receive incoming calls.

peer.on('call', onReceiveCall);

For the start call button click event, we use the getVideo function and proceed as usual.

$('#start-call').click(function(){

    console.log('starting call...');

    getVideo(
        function(MediaStream){

            console.log('now calling ' + to);
            var call = peer.call(to, MediaStream);
            call.on('stream', onReceiveStream);
        },
        function(err){
            console.log('an error occurred while getting the video');
            console.log(err);
        }
    );

});

Conclusion

That’s it! We have implemented video calling using PeerJS. Do note that this consumes more bandwidth than audio calls, so performance might suffer depending on the network.

Things I Learned While Writing for Sitepoint


It’s been more than a year since I started writing articles for Sitepoint. For those who don’t know, Sitepoint is a provider of awesome content for web professionals. Anything web-related you can think of, they have it: tutorials on HTML & CSS, JavaScript, PHP, Ruby, Mobile, Design & UX, WordPress, and even content for web entrepreneurs. And they’ve been doing it since the year 2000, I believe. Going back to the main topic of this article: the things I learned while writing for Sitepoint. I’ve learned a lot, especially about my writing skills. When I first started, I thought my grammar was already perfect, but I’ve never been so wrong. Here’s a list of things I wish I knew when I first started:

  • When using PHP libraries, always install them via Composer whenever possible.
  • When installing a single PHP library, it should be done from the command line using composer require instead of adding the configuration to the composer.json file. Here’s an example that installs the Guzzle HTTP library:
composer require guzzle/guzzle

If you’re using Packagist, you can easily install a package by using the command provided on the page of a specific library.

  • Sharing files that are used in articles (SQL files, project files) should be done with GitHub if it’s a whole project, or Gist if it’s just a single file. I made the mistake of uploading them to the public folder of my Dropbox before.

  • Use the shorthand echo when outputting something with PHP. So instead of using <?php echo 'hello world!'; ?>, it should be written as <?= 'hello world!' ?>.

  • Always use a framework when the examples get too big, so that readers can easily try out the demo.

  • When using a framework, exercise separation of concerns. All routes should be in the routes file, and the routes file shouldn’t contain anything else. I made the mistake of using a closure in the routes to respond to HTTP requests. This shouldn’t be. Best practices should always be used even if the code isn’t for a real project. So the routes should use a controller which returns the view or executes a specific function.

  • When making HTTP requests, such as when the article talks about a specific API, Guzzle or some other HTTP library should be used instead of cURL. Some readers might not have cURL installed.

  • In every article, the convenience of the readers should always be the priority. This means that it should be easy to read. If the article includes a sample project, it should be hosted on GitHub or Bitbucket. Some authors prefer having a separate repository for each article, but I prefer having everything inside a single repository, because the projects and sample code I host there aren’t really updated that much; I think there’s no point having each one in its own repository. My main purpose in hosting with GitHub is to give readers a place to examine the code with syntax highlighting and see how each file relates to all the other files, so that they can easily set up a demo to play with on their local machine.

  • Write your article as if the reader is a beginner. Don’t make assumptions about the skills of the reader. But this doesn’t mean you have to walk the reader through the installation of PHP, or cover the basics, when you’re writing an article about a specific API that uses PHP to make HTTP requests. Every PHP developer already knows that; the reader shouldn’t be reading your article without knowing anything about PHP in the first place. There’s always a minimum amount of requisite knowledge. Another example is telling a reader to install a specific library using Composer. Not all PHP developers know about Composer. I can’t point you to a statistic, but always assume that there’s someone out there who still installs libraries using PEAR or zip files. In those cases you don’t have to walk the reader through installing Composer; simply pointing out the website, or linking to the page which shows how to install Composer, should suffice.

  • Always try to include a demo as a supplement to the article. This is not something I’ve personally done, because most of the articles I write are about PHP, which runs on the server. With client-side articles (HTML, CSS and JavaScript) this is easy, since there’s CodePen, JSFiddle, JS Bin, and many others which allow you to easily create a demo that gives the reader an idea of what the output would look like.

  • Always give some time to the title of the article and the introduction. These are really important: they’re what readers see first when they come across your article on social media sites like Twitter. They’re the first selling point of the article, so it’s important that they’re catchy.

  • Include screenshots to supplement a specific instruction or to show the readers the output.

  • Don’t just paste big blocks of code and explain them in a really long paragraph. Break the block down into parts and explain each part. Then you can paste the big block of code so the reader sees how it all comes together. Oftentimes I do the alternative: paste the big block of code first, give a summary of what it does, and then break it down into multiple parts.

  • Always participate in the comments. It’s not just about writing the article and having it published. If a reader comments on your article or asks a question, you should try to answer as best you can, even if you don’t know the answer. Even if it’s not a direct question, or it’s just an opinion from the reader, you should try to participate and include your own opinions as well. Honestly, this is a part that I need to improve on; I don’t always participate in the comments.

  • Common grammatical errors. The common ones for me were the use of were vs. we’re, its vs. it’s, everyday vs. every day, and where to place the comma or whether it’s even needed. I think I’ve improved when it comes to this, but it’s always nice to have a second pair of eyes looking at your work. For this I use the Hemingway Editor. It grades the readability of an article, marks potential errors, and provides some really good tips about your article.

  • Use a bullet list instead of saying ‘next’ or ‘and then’ all the time. If a bullet list doesn’t feel right, connect sentences with commas.

  • Proper casing. Use all caps when referring to an acronym. One of those acronyms is ID: it should be ID instead of id.

  • Needless words should always be omitted. Common offenders include the words ‘always’, ‘just’, ‘basically’, and ‘simply’.

  • Be consistent with the use of ‘we’ or ‘you’ when referring to the reader. You will often see these two words in tutorials. But once you’ve started using ‘you’ to refer to the reader, or ‘we’ if you’re a merry person who wants to include yourself while telling the reader something, it’s important that you stick with whatever you started using. I prefer ‘we’ in most cases, since ‘you’ sounds really lonely, whereas ‘we’ has the connotation that you went through the same process the reader is going through while you were writing the article.

  • Proofread your article three or more times to ensure that common grammatical errors are caught and the wording is easy on the eyes and comfortable to read. This means that the article should be readable without much mental effort, and without having to go back to a sentence you’ve just read because it didn’t make sense.

  • When referring to a specific library such as jQuery, always be mindful of how it’s written on that library’s website. For jQuery, the ‘j’ is lowercase and the ‘Q’ is capitalized.

  • Always be mindful of the word count. If an article is meant to be a series, then each part should have a word count of no more than 3000 words.

  • Always strive to make the work of the editor easier so that they will be more motivated to review your work.

  • Recently, Sitepoint implemented peer reviews using GitHub. How this works is that all the articles are stored in a GitHub repository. Every new article is a separate branch that’s going to be merged into the main branch. A pull request is created for each new article, which is then reviewed by the other authors. The other authors comment on your work or make changes on their end, and the original author can then use those comments to improve the original article. This kind of workflow has levelled up my Git skills. And through the help of the other authors, I’ve learned how to improve my articles by altering the wording, providing screenshots, and using frameworks when presenting code. The next step I’m looking into is reviewing the work of other authors, as a means of giving back and of learning how other authors construct their articles as well.

That’s it! I won’t treat this section as the conclusion, as there will always be new things to learn; I’ll update this article in the future once I learn some more. Be sure to check out the resources below if you also want to level up your writing skills. And if you’re a web professional, you’re welcome to join Sitepoint. They’re always looking for new authors, and it doesn’t matter whether you’re new to the industry or experienced. As long as you have something to share, you’re welcome to write for Sitepoint. Oh, and articles are paid really well, so it’s worth the time investment.

Resources

Getting Started With Amazon CloudFront


When developing websites, it’s important to deliver front-end assets to the client as fast as possible. One tool that web developers use is a Content Delivery Network (CDN), which is basically a way of distributing front-end assets (scripts, stylesheets, and images) across servers around the globe so that the files have to travel less distance: the server nearest the client delivers the file. Nicholas Zakas has written a really good article on how content delivery networks work; you can check that out if you want to dive deeper. In this article we’re going to take a look at Amazon CloudFront, the content delivery network offered by Amazon Web Services.

Setting Up a New Distribution

Amazon CloudFront serves the files from your S3 bucket. The first thing you need to do is go to the Amazon Web Services console, select CloudFront from the list of services, click ‘Create Distribution’, then click the ‘Get Started’ button under the Web section.

getting started

Once you’re redirected to the next page, you will be greeted by a form where you enter the details of your new distribution.

distribution details

Each distribution uses a specific S3 bucket, which you pick under Origin Domain Name. It will look something like app-name.s3.amazonaws.com. Once you have selected the Origin Domain Name, the Origin ID is automatically filled in. You can click the help icon on each field for information on what it does. Knowing that, you can leave the optional fields blank and stick with the default values. Once you’re done filling out the form, click the ‘Create Distribution’ button. After creation, the new distribution is listed as the top item in your list of distributions.

Your new distribution won’t be immediately usable; you can see this from the status field in the table. Right after creation, its status is ‘In Progress’. I’m not really sure what goes on behind the scenes during this time, but I assume it’s distributing all the files stored in the S3 bucket you selected across different servers around the globe. Once your new distribution is ready, you can use the domain name assigned to it when linking to your files. Do note that files distributed using CloudFront have to be invalidated every time you change them, so it’s not recommended to use CloudFront while you’re still developing your app, since you’d frequently have to invalidate files as you make changes to your code.

Invalidating Files

You will need to invalidate files when you make changes to a file in your S3 bucket, because the changes won’t take effect in the distribution until you do. To invalidate a file, click the distribution in the list, then click the Invalidations tab, click ‘Create Invalidation’, and enter the path of the file you want to invalidate. The path is relative to the root of your bucket, so if your bucket is named bookr and your file is at /uploads/users/image/image-001.jpg, use that as the path. Do note that invalidating a file can take a while, so use it sparingly.

Conclusion

That’s it! In this tutorial, you have learned how to use Amazon CloudFront as a solution for your CDN needs. It’s really easy to set up if you’re already using S3 to serve your front-end assets.

Best Anime of All Time


I decided to give my blog a 3-week break so I could make time for the 200 other things that I want to do. But then I said “fuck it”. It’s not just programming stuff that I can publish here on this blog; it’s my personal blog after all. I can always publish some other stuff that won’t take much of my time to write. So this time I decided to disguise the list of the best anime of all time as an actual blog post. But of course, this is all just my opinion. We all have different tastes, so don’t take my word for it. Try watching 2 or 3 episodes and see for yourself. Ok, here goes:

  • Psycho Pass
  • Code Geass
  • Samurai Champloo
  • Anohana
  • Guilty Crown
  • Xam’d: Lost Memories
  • Parasyte the Maxim
  • Durarara!!
  • Eden of the East
  • Darker than Black
  • Fullmetal Alchemist: Brotherhood
  • Steins;Gate
  • D.Gray-man
  • Hunter X Hunter
  • The Melancholy of Haruhi Suzumiya
  • K-on
  • Hajime no Ippo
  • Katanagatari
  • Tengen Toppa Gurren Lagan
  • Kill la Kill
  • Haikyuu!!
  • Kuroko no Basket
  • Gatchaman Crowds
  • Tsuritama
  • Death Note
  • Yu Yu Hakusho
  • Attack on Titan
  • Avatar: The Last Airbender
  • Avatar: The Legend of Korra
  • Mirai Nikki
  • Toradora!
  • Kaichou wa Maid-sama!
  • Medaka Box
  • Accel World
  • Deadman Wonderland
  • Magi
  • Shaman King
  • Baccano!
  • Sket Dance
  • Akame Ga Kill
  • Nanatsu no Taizai
  • Slam Dunk
  • Assassination Classroom
  • Oregairu
  • Shokugeki no Soma
  • Hitsugi no Chaika
  • One Week Friends
  • Kakumeiki Valvrave
  • Yowamushi Pedal
  • Hamatora
  • Zankyou no Terror
  • Bakuman
  • Usagi Drop
  • Hanasaku Iroha
  • Tiger & Bunny
  • A-Channel

That’s all I can think of for now. I really have a bad memory, so even if I’ve watched a really, really good anime, it might not have made it onto this list.

Quick Tip: How to Add Custom Pages in WordPress


In this quick tip I’ll be showing you the easiest and quickest way to create custom pages under a specific theme in WordPress. When I say custom, I mean a page where you can put anything you want using HTML, CSS, JavaScript, and PHP code. The page also has access to the various APIs that WordPress provides.

To start, create a new file under your theme folder. In this case I’ll be creating a custom-page.php file under the wp-content/themes/twentyfifteen directory of my WordPress installation. Then add the following code to the file:

<?php
/*
Template Name: My Awesome Custom Page
*/
?>
<h1>This is my awesome custom page</h1>

Yes, that’s all there is to it. Note that the Template Name: part is very important; this specific comment is what WordPress uses to recognize your file. You can assign any value you want as long as it’s descriptive.

To assign this template to a specific WordPress page, add a new page from the WordPress admin and select the template we just created under the Template drop-down:

custom wordpress page

Now when you access the page from your browser, you’ll get that awesome heading. From your custom page you can also use the methods available in all the WordPress APIs, as well as the built-in theme functions such as get_header and get_footer.

Getting Started With Amazon S3


Amazon S3 is Amazon’s file storage service. It allows users to upload their files to Amazon’s servers for later access or for sharing with other people. In this tutorial I’m going to walk you through how to use Amazon S3 within your PHP applications.

The first thing you need to do is create a composer.json file and add the following:

{
    "require": {
        "aws/aws-sdk-php": "2.7.*@dev"
    }
}

Next, execute composer install from your terminal to install the Amazon Web Services SDK.

Once the installation is done, create a tester.php file which we will use for interacting with the AWS API. Add the following code to the file:

<?php
require 'vendor/autoload.php';

use Aws\S3\Exception\S3Exception;
use Aws\Common\Aws;
?>

The code above includes the autoload file so that we can use the AWS SDK from our file, then imports the Aws\S3\Exception\S3Exception and Aws\Common\Aws classes so we can access them without spelling out their full namespaces each time. The Aws class is what we use to load the configuration options for the bucket we’re connecting to. All we have to do is call the factory method and pass in the path to the configuration file:

<?php
$aws = Aws::factory('config.php');
?>

The configuration file contains the following code:

<?php
return array(
    'includes' => array('_aws'),
    'services' => array(
        'default_settings' => array(
            'params' => array(
                'credentials' => array(
                    'key'    => 'YOUR_AWS_API_KEY',
                    'secret' => 'YOUR_AWS_API_SECRET',
                ),
                'region' => 'YOUR_BUCKET_REGION'
            )
        )
    )
);
?>

The configuration file basically just returns an array containing the options we need. The first of those is includes, which bootstraps the configuration file with AWS-specific features. Next is services, where we specify the API credentials and region.

Uploading Files

Once that’s done, we can upload files to the S3 bucket of your choice by using the $aws object and calling the get method. This method takes the name of the AWS service you want to use; in this case we’re using S3, so we pass in s3. Next we call the putObject method on the $s3 object and pass in the required parameters as an array. The required keys are Bucket, Key, Body and ACL. Bucket is the name of the bucket where you want to upload the file. Key is the path to the file; with S3 you don’t have to worry about whether the directory you’re uploading to already exists, because no matter how deep it is, S3 automatically creates the directories for you. Body takes the result of an fopen call, which takes the path to the file on your local computer and the operation you want to perform; here we just want to read the file contents, so we specify r. Last is the ACL, or Access Control List, of an object. It’s basically like a file permission. Here we specified public-read, which means the file can be read publicly. For more information about ACLs, you can check out this page. We wrap all of that code inside a try/catch so we can handle errors gracefully.

<?php
$s3 = $aws->get('s3');

try{
    $s3->putObject(array(
        'Bucket' => 'NAME_OF_BUCKET',
        'Key' => '/path/to/file/filename',
        'Body' => fopen('/path/to/file_to_uploads', 'r'),
        'ACL' => 'public-read',
    ));
}catch (S3Exception $e){
    echo "There was an error uploading the file.<br>";
    echo $e->getMessage();
}
?>

Deleting Files

Next, here’s how to delete existing files from your S3 bucket. This uses the deleteObject method, which takes the name of the bucket and the path to the file as its arguments.

<?php
try{

    $s3->deleteObject(array(
        'Bucket' => 'NAME_OF_BUCKET',
        'Key' => '/path/to/file/filename'
    ));

}catch(S3Exception $e){
    echo "There was an error deleting the file.<br>";
    echo $e->getMessage();
}
?>

Listing Buckets

Lastly, here’s how to get a list of the buckets currently in your Amazon account:

<?php
$result = $s3->listBuckets();

foreach ($result['Buckets'] as $bucket) {
    echo "{$bucket['Name']} - {$bucket['CreationDate']}<br>";
}
?>

Conclusion

That’s it! In this tutorial you’ve learned how to work with Amazon S3 from within your PHP applications. Specifically, we’ve taken a look at how to upload files, delete files and list buckets.


Building a Nearby Places Search App With Google Places API


In this tutorial we’re going to build an app that allows users to search for a specific place and then find nearby places based on a specific category, such as restaurants, churches, and schools. We will implement the app with Google Maps, Google Places, and PHP.

Getting API Credentials

First you need to get API credentials from your Google Console and then enable the Google Maps and Google Places APIs. If you don’t know how to do that, feel free to ask Google; I believe this topic has already been written about before. Here are the APIs that you need to enable:

  • Google Maps JavaScript API
  • Google Places API Web Service

Building the App

Now we’re ready to build the app. First, let’s work on the back-end side of things.

Getting Results from the Places API

To make our life easier, we’re going to use a library for making requests to the Google Places API. Add the following to your composer.json file:

{
    "require": {
        "joshtronic/php-googleplaces": "dev-master"
    }
}

Once you’re done, execute composer install on your terminal to install the library. Now we can use the library like so:

<?php

require 'vendor/autoload.php';

$google_places = new joshtronic\GooglePlaces('YOUR_GOOGLE_API_KEY');

$lat = $_POST['lat'];
$lng = $_POST['lng'];
$place_types = $_POST['place_types'];

$google_places->location = array($lat, $lng);
$google_places->radius = 8046; //hard-coded radius
$google_places->types = $place_types;
$nearby_places = $google_places->nearbySearch();

?>

Breaking it down. First we include the autoload file so we can access the library from our file:

<?php
require 'vendor/autoload.php';
?>

Next, we create a new instance of the GooglePlaces class. You need to supply the API key that you got earlier from your Google Console:

<?php
$google_places = new joshtronic\GooglePlaces('YOUR_GOOGLE_API_KEY');
?>

Next, we get the data that will be supplied later on from the client side and assign it to variables:

<?php
$lat = $_POST['lat'];
$lng = $_POST['lng'];
$place_types = $_POST['place_types'];
?>

Lastly, we make the actual request to the Google Places API. This library works a little differently from your usual one, in the sense that we pass the parameters needed by the search method through the object we got from creating a new instance of the GooglePlaces class. The first thing we need to pass is the location; this takes an array containing the coordinates (latitude and longitude) of the place we are using as a reference point. This is basically where we are: the place around which we want to find nearby places. Next you need to supply the radius: how many meters from the reference point the search should be limited to. In this case we supplied a hard-coded value of 8046 meters, which is about 8 kilometers. If you want the user to have more control over this value, you can try adding a slider that lets the user change the radius. The last one is types, an array of the types of places you want to see in the results. Examples are restaurants (yeah, I’m hungry, so I’ve mentioned this twice now), parks, shopping centers, etc. Once you’ve supplied those, you can call the nearbySearch method. This makes the request to the API and returns the data that we need; we just have to turn it into a JSON string so it can be parsed and read later on from the client side.

<?php
$google_places->location = array($lat, $lng);
$google_places->radius = 8046; //hard-coded radius
$google_places->types = $place_types;
$nearby_places = $google_places->nearbySearch();

echo json_encode($nearby_places);
?>

Creating the Map

Next, we move on to the client side. Create a new index.html file and add the following code:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>gmap</title>
  <link rel="stylesheet" href="style.css">
  <script src="http://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js"></script>
  <script type="text/javascript" src="http://maps.googleapis.com/maps/api/js?key=YOUR_GOOGLE_API_KEY&sensor=false&libraries=places"></script>
</head>
<body>
  <div id="map-container">
    <input type="text" id="search">
    <div id="map-canvas"></div>
  </div>

  <div id="place-types">
    <ul>
      <li>
        <input type="checkbox" data-type="bar"> bar
      </li>
      <li>
        <input type="checkbox" data-type="bus_station"> bus station
      </li>
      <li>
        <input type="checkbox" data-type="hospital"> hospital
      </li>
      <li>
        <input type="checkbox" data-type="health"> health
      </li>
      <li>
        <input type="checkbox" data-type="police"> police
      </li>
      <li>
        <input type="checkbox" data-type="post_office"> post office
      </li>
      <li>
        <input type="checkbox" data-type="store"> store
      </li>
      <li>
        <input type="checkbox" data-type="library"> library
      </li>
      <li>
        <input type="checkbox" data-type="fire_station"> fire station
      </li>
      <li>
        <input type="checkbox" data-type="gas_station"> gas station
      </li>
      <li>
        <input type="checkbox" data-type="convenience_store"> convenience store
      </li>
      <li>
        <input type="checkbox" data-type="school"> school
      </li>
    </ul>
    <button id="find-places">Find Places</button>
  </div>

  <script src="map.js"></script>
</body>
</html>

Breaking it down. We include the stylesheet in the page:

<link rel="stylesheet" href="style.css">

Then we include jQuery and the Google Maps JavaScript library. Be sure to update the code so it uses your Google API Key:

<script src="http://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js"></script>
<script type="text/javascript" src="http://maps.googleapis.com/maps/api/js?key=YOUR_GOOGLE_API_KEY&sensor=false&libraries=places"></script>

Next is the map container, which holds map-canvas, the element where the map will be created, and the search box where the user searches for the place to use as a reference point:

<div id="map-container">
  <input type="text" id="search">
  <div id="map-canvas"></div>
</div>

Next are the types of places that we can find. Note that this isn’t everything available in the Google Places API; I just picked some of the places that I think are essential. For a more complete list, you can check this page. Each checkbox has a data-type attribute representing the place type, and after the list is the ‘Find Places’ button, which triggers the search:

<div id="place-types">
  <ul>
    <li>
      <input type="checkbox" data-type="bar"> bar
    </li>
    <li>
      <input type="checkbox" data-type="bus_station"> bus station
    </li>
    <li>
      <input type="checkbox" data-type="hospital"> hospital
    </li>
    <li>
      <input type="checkbox" data-type="health"> health
    </li>
    <li>
      <input type="checkbox" data-type="police"> police
    </li>
    <li>
      <input type="checkbox" data-type="post_office"> post office
    </li>
    <li>
      <input type="checkbox" data-type="store"> store
    </li>
    <li>
      <input type="checkbox" data-type="library"> library
    </li>
    <li>
      <input type="checkbox" data-type="fire_station"> fire station
    </li>
    <li>
      <input type="checkbox" data-type="gas_station"> gas station
    </li>
    <li>
      <input type="checkbox" data-type="convenience_store"> convenience store
    </li>
    <li>
      <input type="checkbox" data-type="school"> school
    </li>
  </ul>
  <button id="find-places">Find Places</button>
</div>

And then lastly we include the map.js file which will make this all work:

<script src="map.js"></script>

Next create the style.css file and put the following code:

#map-container {
  float: left;
}

#map-canvas {
  height: 500px;
  width: 1000px;
}

#place-types {
    float: left;
}

#place-types ul li {
    list-style: none;
}

Finally we move on to the map.js file. First declare the default coordinates of the place that the map will display:

var lat = 18.35827827454; //default latitude
var lng = 121.63744354248; //default longitude
var home_coordinates = new google.maps.LatLng(lat, lng); //set default coordinates

Next, define the map options and use them to create the map on the map-canvas element (the map variable is used by the rest of the script):

var map_options = {
  center: new google.maps.LatLng(lat, lng), //set map center
  zoom: 17, //set zoom level to 17
  mapTypeId: google.maps.MapTypeId.ROADMAP //set map type to road map
};

var map = new google.maps.Map(document.getElementById('map-canvas'), map_options); //create the map

Next we turn the search box into an auto-complete element. This allows the user to see suggestions of matching locations as they type in the search box. We also need to bind it to the map so the auto-complete bounds are driven by the current viewport of the map.

var input = document.getElementById('search'); //get element to use as input for autocomplete
var autocomplete = new google.maps.places.Autocomplete(input); //set it as the input for autocomplete
autocomplete.bindTo('bounds', map); //bind auto-complete object to the map

Next we listen for the place_changed event triggered by the search box. When it fires, we get the place information using the getPlace method on the auto-complete object, which lets us check whether the place has a viewport of its own. If it does, we call the fitBounds method on the map object and pass in the geometry.viewport attribute from the place object, which centers the map on the location. Otherwise, we call the setCenter method on the map object with the geometry.location attribute from the place object, and also call setZoom to make sure we keep the same zoom level. Lastly we set the position of the home_marker to the geometry.location of the place object.

//executed when a place is selected from the search field
google.maps.event.addListener(autocomplete, 'place_changed', function(){

    //get information about the selected place in the autocomplete text field
    var place = autocomplete.getPlace();

    if (place.geometry.viewport){ //for places within the default view port (continents, countries)
      map.fitBounds(place.geometry.viewport); //set map center to the coordinates of the location
    } else { //for places that are not on the default view port (cities, streets)
      map.setCenter(place.geometry.location);  //set map center to the coordinates of the location
      map.setZoom(17); //set a custom zoom level of 17
    }

    home_marker.setMap(map); //set the map to be used by the  marker
    home_marker.setPosition(place.geometry.location); //plot marker into the coordinates of the location

});

Next we declare an array that will store the markers for the places returned by a search. Don't confuse these with the reference point, which uses home_marker; the markers here are for the place types such as grocery, church, etc. For convenience I'll refer to them as place type markers.

var markers_array = [];

Next, create the function that removes the place type markers from the map. We need to call this every time the user clicks on the 'Find Places' button so that the previous search results are removed from the map.

function removeMarkers(){
  for(var i = 0; i < markers_array.length; i++){
    markers_array[i].setMap(null);
  }
}
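One caveat worth noting: setMap(null) only detaches the markers from the map; markers_array still holds references to them, so the array grows with every search. A variant that also empties the array might look like this (a sketch; clearMarkers is my own name, not part of the original code):

```javascript
function clearMarkers(markers_array){
  for(var i = 0; i < markers_array.length; i++){
    markers_array[i].setMap(null); //detach the marker from the map
  }
  markers_array.length = 0; //drop the references so the array doesn't grow forever
}
```

This keeps later calls from re-iterating over markers that were already detached in previous searches.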

Finally we listen for the click event on the 'Find Places' button. The first thing we do is get the coordinates of the home_marker, which represent the reference point. After that, we declare an empty array where we store the place types selected by the user: we loop through all the checked checkboxes and push the value of their data-type attribute into the array. Next we call removeMarkers to clear any place type markers currently plotted on the map. We then make a POST request to the server, passing in the coordinates of the reference point and the place types array. Once we get a response, we call JSON.parse so we can extract the results. From there we loop through all the results, get the coordinates of each place, and plot a marker on the map. We also assign an infowindow to each marker so that when it's clicked it shows the name of the place.

$('#find-places').click(function(){

  var lat = home_marker.getPosition().lat();
  var lng = home_marker.getPosition().lng();

  var place_types = [];

  //loop through all the place types that has been checked and push it to the place_types array
  $('#place-types input:checked').each(function(){
    var type = $(this).data('type');
    place_types.push(type);
  });

  removeMarkers(); //remove the current place type markers from the map

  //make a request to the server for the matching places
  $.post(
    'places.php',
    {
      'lat': lat,
      'lng': lng,
      'place_types': place_types
    },
    function(response){

      var response_data = JSON.parse(response);

      if(response_data.results){
        var results = response_data.results;
        var result_count = results.length;

        for(var x = 0; x < result_count; x++){

          //get coordinates of the place
          var lat = results[x]['geometry']['location']['lat'];
          var lng = results[x]['geometry']['location']['lng'];

          //create a new infowindow
          var infowindow = new google.maps.InfoWindow();

          //plot the marker into the map
          var marker = new google.maps.Marker({
            position: new google.maps.LatLng(lat, lng),
            map: map,
            icon: results[x]['icon']
          });

          markers_array.push(marker);

          //assign an infowindow to the marker so that when its clicked it shows the name of the place
          google.maps.event.addListener(marker, 'click', (function(marker, x){
            return function(){
              infowindow.setContent("<div class='no-scroll'><strong>" + results[x]['name'] + "</strong><br>" + results[x]['vicinity'] + "</div>");
              infowindow.open(map, marker);
            }
          })(marker, x));


        }
      }

    }
  );

});
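A quick note on the immediately-invoked function wrapped around the click listener above: var in JavaScript is function-scoped, so without that wrapper every listener would close over the same x, which equals result_count by the time any marker is clicked. A minimal sketch of the pattern, independent of the Maps API (the names here are made up for illustration):

```javascript
var labels = ['bar', 'hospital', 'school'];
var handlers = [];

for(var x = 0; x < labels.length; x++){
  //the IIFE copies the current value of x into i,
  //so each handler remembers its own index
  handlers.push((function(i){
    return function(){ return labels[i]; };
  })(x));
}

handlers[0](); //returns 'bar' instead of an out-of-range index
```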

Here’s a screenshot of the final output:

google places

Conclusion

That’s it! In this tutorial you’ve learned how to work with the Google Places API in PHP. We have also created a simple app that allows users to search for specific types of places near a location used as a reference point. If you want to learn more, be sure to check out the resources below.

Resources

Working With Youtube Data API in PHP

| Comments

Decades ago I got this project where I needed to work with the YouTube API to get the details of videos uploaded by a specific channel, and then create something like a mini-YouTube website out of it. Just kidding about the decades part; it was probably 4-6 months ago. Anyway, it’s only now that I’ve got the time to actually write about it. So here it goes.

Getting API Credentials

First you need to get the API credentials from your Google Console. A single API credential covers all of the APIs that Google offers, so you might already have one. If you do, then all you have to do is enable the API on your Google Console page. Currently you would see something like this when you go to APIs & Auth and then click on APIs in your Google Console:

google apis

What we need is the YouTube Data API v3. Click it and enable it. If you do not have an API credential yet, click on ‘Credentials’ under APIs & Auth and then ‘Create new Key’ under the Public API Access section. Choose Server Key as the key type since we’re working primarily on the server. Don’t take my word for it though; based on my experience this sometimes doesn’t work and you actually need to select Browser Key. I just hope Google has fixed this already, since server keys are only supposed to be used on the server and browser keys on the client side. Clicking on either browser key or server key will generate an API key for you. This is the key that you will use whenever you need to talk to the YouTube API.

Dependencies

As we are primarily going to be requesting data from another server, we will need curl. If you don’t have it yet, install it on your system. Here’s how you install it on Ubuntu:

sudo apt-get install curl
sudo apt-get update
sudo apt-get install libcurl3 php5-curl

If you’re using another Operating System then feel free to ask Google.

Playing with the API

To make things easier we need a library that does most of the heavy lifting for us: signing requests, constructing them and actually sending them to the server. Because we’re lazy folks, we don’t want to do all that every time we need to talk to an API. Thankfully an awesome guy by the alias of madcoda has already done that work for us. If you already have Composer installed, simply execute the following command inside your project directory:

composer require madcoda/php-youtube-api

This will install the library into your vendor directory, add it to your composer.json file and register it with the autoloader.

Once it’s done you can use the library by including the autoload.php file under the vendor directory and then importing the Madcoda\Youtube namespace.

<?php
require 'vendor/autoload.php';

use Madcoda\Youtube;
?>

Next create a new instance of the Youtube class and pass in the API Key that you acquired earlier as the key item in an array.

<?php
$youtube = new Youtube(array('key' => 'YOUR_API_KEY'));
?>

Searching

With this library you can search for videos, playlists and channels using the search method. This method takes your query as its argument. For example, say you want to find ‘Awesome’:

<?php
$results = $youtube->search('Awesome');
?>

This will return something similar to the following if you use print_r on the $results:

Array
(
[0] => stdClass Object
    (
        [kind] => youtube#searchResult
        [etag] => "tbWC5XrSXxe1WOAx6MK9z4hHSU8/xBkrpubrM2M6Xi88aNBfaVJV6gE"
        [id] => stdClass Object
            (
                [kind] => youtube#video
                [videoId] => qmTDT92VIRc
            )

        [snippet] => stdClass Object
            (
                [publishedAt] => 2015-01-23T23:03:31.000Z
                [channelId] => UCZpKcVBccIjO9n0RXx3ZGFg
                [title] => PEOPLE ARE AWESOME 2015 (UNBELIEVABLE)
                [description] => People are Awesome 2015 unbelievable talent and natural skills! Subscribe to NcCrullex for more people are awesome videos. Chris Samba Art Channel: ...
                [thumbnails] => stdClass Object
                    (
                        [default] => stdClass Object
                            (
                                [url] => https://i.ytimg.com/vi/qmTDT92VIRc/default.jpg
                            )

                        [medium] => stdClass Object
                            (
                                [url] => https://i.ytimg.com/vi/qmTDT92VIRc/mqdefault.jpg
                            )

                        [high] => stdClass Object
                            (
                                [url] => https://i.ytimg.com/vi/qmTDT92VIRc/hqdefault.jpg
                            )

                    )

                [channelTitle] => NcCrulleX
                [liveBroadcastContent] => none
            )

    )

As you can see, most of the data that you might want is stored in the snippet item: things like the title, the description and the URLs of the thumbnails.

You might ask: how would you know if an item is a video, playlist or channel? You may have already noticed from the results above. It’s located under id –> kind. The kind is youtube#video for a video, youtube#channel for a channel and youtube#playlist for a playlist. Don’t believe me? Try using the API to search for ‘the new boston’ and you’ll see.
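To illustrate, here’s a small sketch of that check (written in JavaScript for brevity, since the response is plain JSON; the same branching applies to the stdClass objects you get in PHP):

```javascript
//classify a search result by its id.kind field,
//mirroring the structure of the print_r output above
function kindOf(result){
  switch(result.id.kind){
    case 'youtube#video':    return 'video';
    case 'youtube#channel':  return 'channel';
    case 'youtube#playlist': return 'playlist';
    default:                 return 'unknown';
  }
}
```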

If you only want to search for videos then you can use the searchVideos method. Just like the search method, this takes your query as its argument:

<?php
$results = $youtube->searchVideos('Ninja');
?>

If you only want to get videos from a specific channel, you can do it in 2 calls. First get the channel id using the getChannelByName method, extract the id from the result, then pass that id to searchChannelVideos to search for videos in the channel:

<?php
$channel = $youtube->getChannelByName('thenewboston');
$results = $youtube->searchChannelVideos('ruby', $channel->id);
?>

The code above would return the first page of results for the ‘ruby’ videos in ‘thenewboston’ channel.

If you only want to return playlists on a specific channel, you can do:

<?php
$channel = $youtube->getChannelByName('thenewboston');
$results = $youtube->getPlaylistsByChannelId($channel->id);
?>

If you want to get the items in a playlist, you can do it in 3 calls:

<?php
$channel = $youtube->getChannelByName('thenewboston');
$playlists = $youtube->getPlaylistsByChannelId($channel->id);
$playlist_items = $youtube->getPlaylistItemsByPlaylistId($playlists[0]->id);
?>

If you want to be more liberal with your search, you can use the searchAdvanced method:

<?php
$results = $youtube->searchAdvanced(array(
    'q' => 'fruits',
    'part' => 'snippet',
    'order' => 'rating'
));
?>

Here’s a breakdown of the parameters we’ve just used:

  • q – your query
  • part – the part of the result which you want to get. Earlier in the sample result we saw that there are only 2 parts: id and snippet. This parameter lets you specify which of them to return. If you only need the video, playlist or channel id then supply id as the part. If you need the full details then use snippet. If you need both, you can use a comma-separated list: id,snippet.
  • order – the basis of the ordering. In the example we used rating, which orders the results from the highest rating to the lowest (most likely based on the likes a video has received). You can also use viewCount, which orders the results from the highest number of views to the lowest.
  • type – the type of item. This can either be video, playlist, or channel.

There are a whole bunch more parameters you can specify. Be sure to check out the search reference.

Pagination

You can also paginate results. First you need to make an initial request so you can get the nextPageToken. Then check if the page token exists; if it does, add a pageToken item to the parameters you supplied earlier and make another request. Since we supplied the nextPageToken, this navigates to the second page of the same result set. By default the YouTube Data API only returns 10 rows per request, which means the second page shows rows 11 to 20.

<?php
$params = array(
    'q' => 'Ruby',
    'type' => 'video',
    'part' => 'id, snippet',
    'maxResults' => 100
);

$search = $youtube->searchAdvanced($params, true);

//check for a page token
if(isset($search['info']['nextPageToken'])){
    $params['pageToken'] = $search['info']['nextPageToken'];
}

//make another request with the page token added
$search = $youtube->searchAdvanced($params, true);

//do something with the search results here
?>         

You can also use the paginateResults method to implement pagination. Just like the method above, we need to make an initial request to get the nextPageToken. We then store the tokens in an array so we can navigate through the results easily. The paginateResults method takes the original search parameters as its first argument and the page token as its second, so to navigate to the next page you supply the nextPageToken you got from the previous result as the second argument. Note that in the example below the indexes for $page_tokens are just hard-coded; you will have to implement the generation of pagination links yourself and then use their index when navigating through the results. Also note that the results aren't cached, which means that whenever you paginate through the results a new request is made to the YouTube Data API. You will also need to implement caching if you don't want to run out of requests.

<?php
//your search parameters
$params = array(
    'q' => 'Python',
    'type' => 'video',
    'part' => 'id, snippet',
    'maxResults' => 100
);

//array for storing page tokens
$page_tokens = array();

//make initial request
$search = $youtube->paginateResults($params, null);

//store page token
$page_tokens[] = $search['info']['nextPageToken'];

//navigate to the next page
$search = $youtube->paginateResults($params, $page_tokens[0]);

//store page token
$page_tokens[] = $search['info']['nextPageToken'];

//navigate to the next page
$search = $youtube->paginateResults($params, $page_tokens[1]);

//store page token
$page_tokens[] = $search['info']['nextPageToken'];

//navigate to the previous page
$search = $youtube->paginateResults($params, $page_tokens[0]);

//do something with the search results here
?>

Conclusion

That’s it! In this tutorial you’ve learned how to work with the YouTube Data API in PHP. You’ve learned how to get the info of a specific video, get general details about videos in a specific channel, get the videos in a specific playlist, and also search for videos, playlists and channels using a query. Don’t forget to keep an eye on the API request limits though. The limit information can be found on the YouTube Data API page in your Google Console.

Resources

Creating a Chrome Extension

| Comments

In this tutorial I’ll be showing you how to create a very basic Chrome extension: one that allows us to schedule posts with the Ahead project that I created. Here’s how it will work:

  1. User clicks on the extension on a page they want to share at a future time.
  2. The extension makes a request to the server where Ahead is currently hosted.
  3. The server returns a response and it is then outputted by the extension.

Creating the Extension

Before anything else we need to create the manifest.json file. This is the most important file, since Chrome won’t be able to recognize our extension without it.

{
  "manifest_version": 2,
  "name": "Ahead",
  "version": "1.0",
  "description": "Easily schedule posts",

  "browser_action": {
    "default_icon": "icon.png"
  },

  "background": {
    "scripts": ["background.js"]
  },

  "content_scripts": 
    [
        {
            "matches":["<all_urls>"],
            "js":["content.js"],
            "run_at": "document_end"
        }
    ],
  
  "permissions": ["<all_urls>", "storage"],
  "options_page": "options.html"
}

Breaking it down:

  • manifest_version – the version of the manifest file format. The Chrome browser has been around for quite a while now, and so have the extensions written when it first came out. Currently the latest version that we can assign to a manifest file is 2.

  • name – the name you want to give to the extension.

  • version – the version of the extension.
  • description – a descriptive text you want to show your users. This is the text that will show right under the name of the extension when the user accesses the chrome://extensions page.
  • browser_action – used to specify the element which will trigger the extension. In this case we want an icon to be the trigger so we set the default_icon. The value would be the filename of the icon.
  • content_scripts – these are the scripts that run in the context of the current web page. The matches property is where you specify an array of URLs where the content scripts can run. In this case we set a special value called "<all_urls>" so the script can run on any webpage. Next is the js property, where we specify an array of paths to the content scripts. Last is the run_at property, where we specify when to run the content scripts. We set it to document_end so we can be sure the whole page is loaded before our script executes.
  • background – used to specify the background scripts. Content scripts only have access to the elements of the current page, not to the Chrome API methods, so we need a background script in order to access those methods. This property takes a single property called scripts, where you specify an array of the background scripts you wish to use. In this case we’re just going to use a single background.js file.
  • permissions – this is where we specify an array containing the list of things the extension needs access to. In this case we’re just going to use "<all_urls>" and storage. We use storage to get access to the methods for saving custom settings for the extension. In our case the setting is the API key required by Ahead.
  • options_page – used for specifying which HTML file will be used for the options page.

Next let’s proceed with the options page:

<!DOCTYPE html>
<html>
<head><title>Ahead</title></head>
<body>

    API Key:
    <input type="text" id="api_key">

    <button id="save">Save</button>

    <script src="options.js"></script>
</body>
</html>

You can use CSS just like you would on a normal HTML page if you want, but for this tutorial we won’t. The options page is pretty minimal: all we need is the actual field, a button to save the settings, and a link to the options page’s JavaScript file.

Here’s the options.js file:

function save_options(){
  var api_key = document.getElementById('api_key').value;

  chrome.storage.sync.set({
    'api_key': api_key
  },
  function(){
    alert('API Key Saved!');
  });
}


function restore_options(){

  chrome.storage.sync.get({
    'api_key': ''
  },
  function(items){
    document.getElementById('api_key').value = items.api_key;
  });
}
document.addEventListener('DOMContentLoaded', restore_options);
document.getElementById('save').addEventListener('click',
    save_options);

In the above file we declared 2 functions: save_options and restore_options. save_options saves the settings to Chrome storage, and restore_options retrieves them and populates the value of each field. In options.js we have access to the Chrome storage API; the main methods we’re using are sync.set and sync.get. We use sync.set to save the settings in Chrome storage and show an alert box saying the settings were saved when it succeeds. sync.get, on the other hand, retrieves the existing setting from Chrome storage, and we use the retrieved value to populate the text field. save_options is called when the save button is clicked, and restore_options when the DOM of the options page has fully loaded.

Next is the background.js file. We primarily use this file to listen for the click event on the browser_action, which is the icon of the extension located in the upper right corner of Chrome:

chrome.browserAction.onClicked.addListener(function(tab){

  chrome.tabs.query({active: true, currentWindow: true}, function(tabs) {
    var activeTab = tabs[0];
    chrome.tabs.sendMessage(activeTab.id, {"message": "clicked_browser_action"});
  });
});

You don’t need to worry about the code above too much. All it does is listen for the click event on the extension icon, then use the tabs.sendMessage method to tell the current tab that the icon has been clicked.

This brings us to the content.js file, which basically just waits for that message. Once it receives the message, we retrieve the API key using the sync.get method and then make a POST request to the Ahead URL responsible for accepting posts to be published. The content is the title of the current page followed by its URL. We construct a new form data object and supply queue, api_key and content as the fields. We set queue to true because we want to schedule the post to be published later; if you set it to false it will be published immediately. For api_key we simply supply what we got from Chrome storage, and last is the content. We then send this form data to the Ahead URL.

Finally we listen for the onload event on the request, which fires whenever the request succeeds. All we have to do is parse the response, since it’s a JSON string, and alert the value of its text property, which is basically a message saying that the post was scheduled and when it will be published. If we get an error, the onerror event fires and we simply tell the user to try again via an alert.

chrome.runtime.onMessage.addListener(
  function(request, sender, sendResponse){

    chrome.storage.sync.get({
        'api_key': ''
    },
    function(items){
        var api_key = items.api_key;

        var http_request = new XMLHttpRequest();
        http_request.open('POST', 'http://ec2-54-68-251-216.us-west-2.compute.amazonaws.com/api/post', true);
        var content = document.title + ' ' + window.location.href;
        var form_data = new FormData();
        form_data.append('queue', true);
        form_data.append('api_key', api_key);
        form_data.append('content', content);
        http_request.send(form_data);

        http_request.onload = function(){
            if(http_request.status >= 200 && http_request.status < 400){
              var response_data = JSON.parse(http_request.responseText);
              alert(response_data.text);
            }
        };


        http_request.onerror = function() {
            alert('Something went wrong while trying to post. Please try again');
        };
    });


  }
);

Installing the Extension

Now we’re ready to actually install the extension. You can do that by enabling developer mode on the Chrome extensions page:

chrome://extensions/

This will show you 3 new buttons: ‘load unpacked extension’, ‘pack extension’ and ‘update extensions now’. All we need is the first one. Click it, then select the folder that contains the manifest.json file in its root directory. Chrome will then list it as one of the available extensions:

extensions

Once it’s loaded, click on the ‘options’ link to access the options page. From there, add the API key which you can get from the Ahead website.

At this point, any new tab you open or existing tab you reload will be usable with the extension. Just click on the extension icon and it will schedule a post using the title of the page and its URL as the content.

Conclusion

That’s it! In this tutorial you’ve learned the basics of how to create a chrome extension. You’ve learned how to listen for the click event on the extension icon, how to add an options page and how to get the details from the current page.

Getting Started With Lumen

| Comments

In this tutorial I’ll walk you through Lumen, a lightweight framework from the same guys that made Laravel. Lumen is basically a lighter version of Laravel.

Installation

You can install Lumen by using composer’s create-project command. Simply execute the following command on your preferred install directory:

composer create-project laravel/lumen --prefer-dist

Once the installation is done, you can navigate to the lumen directory and execute the following:

php artisan serve --port=7771

This will serve the project on port 7771 of your localhost:

http://localhost:7771/

If the installation completed successfully, you will be greeted by the default screen.

Using Third Party Libraries

You can use third party libraries with Lumen by adding the package that you want to install in the composer.json file. Here’s an example:

"require": {
    "laravel/lumen-framework": "5.0.*",
    "vlucas/phpdotenv": "~1.0",
    "elasticsearch/elasticsearch": "~1.0",
    "guzzlehttp/guzzle": "~5.0"
},

Note that lumen-framework and phpdotenv are there by default since they are needed in order for Lumen to work. In the file above we have added elasticsearch and guzzlehttp as our dependencies.

You can then use these libraries by creating new instances of them in the files where you want to use them:

<?php
$http_client = new \GuzzleHttp\Client();
$es_client = new \Elasticsearch\Client();
?>

Configuration

By default Lumen is pretty bare-bones, which means we need to do some configuration if we want to use some of the features we usually have in Laravel. In Lumen you can enable most of those functionalities by editing the bootstrap/app.php file.

Enabling Sessions

You can enable sessions by removing the comment on the middleware which says Illuminate\Session\Middleware\StartSession:

<?php
$app->middleware([
    //'Illuminate\Cookie\Middleware\EncryptCookies',
    //'Illuminate\Cookie\Middleware\AddQueuedCookiesToResponse',
    'Illuminate\Session\Middleware\StartSession',
    //'Illuminate\View\Middleware\ShareErrorsFromSession',
    //'Laravel\Lumen\Http\Middleware\VerifyCsrfToken',
]);
?>

Enabling Eloquent

If you need to use Eloquent in your app, you can enable it by removing the comment on the following lines:

<?php
$app->withFacades();
$app->withEloquent();
?>

Dot Env

Lumen uses a .env file to set the environment configuration for the project. This way you can have a different .env file on your local machine and on your server, and you can tell git to ignore this file so it doesn’t get pushed along to the server when you deploy your changes. Here’s how the .env file looks by default:

APP_ENV=local
APP_DEBUG=false
APP_KEY=SomeRandomKey!!!

APP_LOCALE=en
APP_FALLBACK_LOCALE=en

DB_CONNECTION=mysql
DB_HOST=localhost
DB_DATABASE=homestead
DB_USERNAME=homestead
DB_PASSWORD=secret

CACHE_DRIVER=memcached
SESSION_DRIVER=memcached
QUEUE_DRIVER=database

As you can see from the file above, APP_ENV sets the name of the environment, and APP_DEBUG turns debug output on or off. It’s set to false by default; while developing you should set it to true so that you have an idea of what’s wrong when testing your app. APP_KEY is basically used as a salt for sessions, and you can use a random string generator for it. APP_LOCALE and APP_FALLBACK_LOCALE set the language of your app, which is English by default.

Anything that starts with DB_ is database configuration. By default it’s expecting to connect to a MySQL database. DB_HOST is the host on which the database is running, DB_DATABASE is the name of the database you want to connect to, DB_USERNAME is the user you want to log in as, and DB_PASSWORD is that user’s password.

After the database configuration come the cache, session and queue driver settings. The cache and session drivers use memcached by default, so you’ll have to install memcached if you’re using the caching and session functionalities. If memcached is not present on the system, Lumen will just fall back to the default driver, which is the filesystem.
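In your own code you can read these values through the env() helper that Lumen provides (courtesy of phpdotenv and Laravel’s helpers). The variable names below are the ones from the default .env file:

```php
<?php
// env() reads a variable from the .env file; the second
// argument is a fallback default used when the variable is not set.
$host  = env('DB_HOST', 'localhost');
$debug = env('APP_DEBUG', false);
```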

Note that before you can use the .env file, you need to uncomment the following line in your bootstrap/app.php file. This way Lumen will load the .env file on the root of your project.

Dotenv::load(__DIR__.'/../');

Directory Structure

Here’s what the default directory structure looks like in Lumen. The ones marked with * are files:

app
bootstrap
database
public
resources
storage
tests
vendor
*artisan
*server.php
*composer.json

The app directory is where you will usually work. This is where the routes, controllers and middlewares are stored.

The bootstrap directory only contains one file by default, the app.php file. As you have seen earlier, it’s where you can configure and add new functionality to Lumen.

The database directory is where the database migrations and seeders are stored. You use migrations to easily jump from one database version to another; it’s like version control for your database. Seeds, on the other hand, are used to populate the database with dummy data so that you can easily test your app without having to enter the information through the app itself.

The public directory is where your public assets are stored. Things like css, javascript and images are stored in this directory.

The resources directory is where you store the views that you use for your app.

The storage directory is where logs, sessions and cache files are stored.

The tests directory is where you put your test files.

The vendor directory is where the dependencies of your app are stored. This is where composer installs the packages that you specified in your composer.json file.

The artisan file is the file that is used for command line tasks for your project. We have used it earlier when we served the project. The artisan file can also be used to create migrations, seeds and other tasks that you usually perform through the command line.

The server.php file is used for serving the files without the use of a web server like Apache.

Routes

Routes are stored in the app/Http/routes.php file. Here’s how you would declare a route in Lumen:

<?php
$app->get('/', function(){
    return 'Hello World!';
});
?>

If you want to use a controller method to handle the response for a specific route then you can do something like this:

<?php
$app->get('/', 'App\Http\Controllers\HomeController@index');
?>

Then you would need to create a HomeController controller and declare an index method, which will then be used to return the response.

Controllers

Controllers are stored in the app/Http/Controllers directory. Needless to say, the convention is one file per controller. Otherwise it would be really confusing. Here’s the basic structure of a controller:

<?php namespace App\Http\Controllers;

use Illuminate\Http\Request;
use Laravel\Lumen\Routing\Controller as BaseController;

class HomeController extends BaseController{

}
?>

Note that we need to use Illuminate\Http\Request to be able to access the request parameters for each request. We also need to use Laravel\Lumen\Routing\Controller. This allows us to extend the functionality of the base controller class.
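Putting the two pieces together, a HomeController that handles the / route from the Routes section might look like this (a minimal sketch; the index view name is just carried over from the examples below):

```php
<?php namespace App\Http\Controllers;

use Illuminate\Http\Request;
use Laravel\Lumen\Routing\Controller as BaseController;

class HomeController extends BaseController {

    // Handles GET / and returns the index view
    public function index(Request $request)
    {
        return view('index');
    }

}
```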

Views

Lumen still comes with Blade templating; all you have to do is create your views under the resources/views directory and use .blade.php as the file extension. Though unlike Laravel, you return views this way:

<?php
public function index(){
    return view('index');
}
?>

In the example above we’re returning the index view stored in the root of the resources/views directory. If we want to pass some data along, we can supply the array or object that we want to pass:

<?php
$array = array(
    'name' => 'Ash Ketchum',
    'pokemon' => 'Pikachu'
);

return view('index', $array);
?>

It can then be rendered in the view like so:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>test</title>
</head>
<body>
    Hi my name is {{ $name }}, my Pokemon is {{ $pokemon }}
</body>
</html>

Database

When working with a database you first need to edit the database configuration values in your .env file.

Migrations

Once that’s done, you can check whether your app can connect to your database by installing the migrations table. You can do that by executing the following command in the root directory of your project:

php artisan migrate:install

The command above creates the migrations table in your database, which Lumen uses to keep track of which migrations are currently applied. If that worked without problems and you see that a migrations table has been created in your database, then you’re good to go.

Next you can create a new table by using the make:migration command. This takes the action that you wish to perform. In this case we want to create a new table, so we use --create and supply the name of the table as its value. The second argument is the name that will be assigned to the migration class.

php artisan make:migration --create=users create_users_table

The command above will create a file which looks like the following in the database/migrations directory:

<?php

use Illuminate\Database\Schema\Blueprint;
use Illuminate\Database\Migrations\Migration;

class CreateUsersTable extends Migration {

    /**
     * Run the migrations.
     *
     * @return void
     */
    public function up()
    {
        Schema::create('users', function(Blueprint $table)
        {
            $table->increments('id');
            $table->timestamps();
        });
    }

    /**
     * Reverse the migrations.
     *
     * @return void
     */
    public function down()
    {
        Schema::drop('users');
    }

}
?>

The only thing that we need to edit here are the method calls inside the up method:

<?php
Schema::create('users', function(Blueprint $table)
{
    $table->increments('id');
    $table->string('name');
    $table->integer('age');
});
?>

That is where we specify the fields that we need to add to the users table.

Once you’re happy with the file, save it and then run:

php artisan migrate

This will create the table in your database and add a new row to the migrations table.
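If you made a mistake, the migrator can also undo what it applied. These commands come from Laravel’s migration system, which Lumen reuses:

```shell
# Roll back the last batch of migrations (runs each migration's down() method)
php artisan migrate:rollback

# Roll back everything, then re-run all migrations from scratch
php artisan migrate:refresh
```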

Seeds

You can create a new database seeder file inside the database/seeds directory. Here’s the usual structure of a seeder:

<?php

use Illuminate\Database\Seeder;

class UserTableSeeder extends Seeder
{
    public function run()
    {

        //seeding code       

    }
}
?>

Inside the run method is the actual seeding code. You can use the usual Laravel-flavored database queries inside it:

<?php
DB::table('users')->insert(
    array(
        'name' => 'Ash Ketchum',
        'age' => 10
    )
);

DB::table('users')->insert(
    array(
        'name' => 'Brock',
        'age' => 15
    )
);

DB::table('users')->insert(
    array(
        'name' => 'Misty',
        'age' => 12
    )
);
?>

Once that’s done, save the file and open up the DatabaseSeeder.php file. This is where you specify which seeders you want to run whenever you execute the php artisan db:seed command. In this case we want to add the UserTableSeeder:

$this->call('UserTableSeeder');

Before executing the php artisan db:seed command, we first need to reload the autoloaded files by running the composer dump-autoload command. We need to do this every time we add a new seeder so that Lumen can load it.

Getting Data

From your routes file you can now try fetching the users that we’ve added:

<?php
$app->get('/db-testing', function(){

    $users = DB::table('users')->get();
    return $users;
});
?>

With Lumen you can use the query builder, basic queries and even Eloquent. So if you already know how to work with those then you’re good to go.
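For example, assuming you’ve uncommented $app->withEloquent() in bootstrap/app.php and created the users table from the migration above, a minimal Eloquent model could look like this (the App namespace and the field list are assumptions based on the earlier examples):

```php
<?php namespace App;

use Illuminate\Database\Eloquent\Model;

// By convention, Eloquent maps the User model to the "users" table.
class User extends Model {

    // Allow mass assignment for the columns we created in the migration.
    protected $fillable = ['name', 'age'];

}
```

A route could then fetch rows with App\User::all() or App\User::where('age', '<', 13)->get().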

Conclusion

That’s it! In this tutorial I’ve walked you through Lumen and how you can install, configure and work with the different functionalities that it can offer.