Wern Ancheta

Adventures in Web Development.

Newsletters I Subscribe To


Following last week's post on the podcasts I listen to, this time I'll talk about some of the newsletters I subscribe to.

Ruby Weekly

Ruby Weekly is an email round-up of Ruby news and articles. This newsletter is mainly about Ruby but you can also find some interesting stuff here even if you’re not a Ruby developer. Links to articles about command line tools, databases and version control are also included in every issue.

Schedule: Every Friday Visit Site

JavaScript Weekly

JavaScript Weekly is an email round-up of interesting JavaScript news and articles. It also has a jobs section in which you can find jobs exclusively for JavaScript developers or engineers.

Schedule: Every Friday Visit Site

Webtools Weekly

Web Tools Weekly is a front-end development and web design newsletter with a focus on tools. Each issue features a brief tip or tutorial, followed by a weekly round-up of various apps, scripts, plugins, and other resources to help front-end developers solve problems and be more productive.

Schedule: Every Saturday Visit Site

Gamedev.js Weekly

Gamedev.js Weekly is a newsletter all about HTML5 game development. I'm not really a game developer myself, so reading articles from this newsletter just gives me an idea of how games for the browser are developed.

Schedule: Every Saturday Visit Site

StackExchange Programmers Newsletter

A curated list of interesting programming questions from the programmers.stackexchange website.

Schedule: Every Saturday Visit Site

DB Weekly

A weekly round-up of database technology news and articles covering new developments, SQL, NoSQL, document databases, graph databases, and more.

Schedule: Every Friday Visit Site

Versioning

A daily newsletter by Sitepoint. If you are tired of keeping yourself updated via your Twitter feed, Hacker News, and a bunch of other sources, then Versioning is for you, as they curate a bunch of links that web developers might find useful.

Visit Site

Hacker Newsletter

If you can’t keep up with Hacker News, Hacker Newsletter is the way to go. Kale Davis curates only the most interesting stuff that you might find on Hacker News.

Schedule: Every Friday Visit Site

Responsive Design Weekly

A free, once–weekly round-up of responsive design articles, tools, tips, tutorials and inspirational links.

Schedule: Every Friday Visit Site

Node Weekly

Node Weekly is a free, once-weekly e-mail round-up of Node.js news and articles. It's from the same guy (Peter Cooper) who brought us the awesome JavaScript Weekly newsletter, but this one is primarily focused on Node.js.

Schedule: Every Friday Visit Site

WPMail.me

WPMail.me is a free WordPress Newsletter, once a week, with a round-up of WordPress news and articles.

Schedule: Every Friday Visit Site

PyCoder’s Weekly

A free weekly e-mail newsletter for those interested in Python development and various topics around Python.

Schedule: Every Saturday Visit Site

Python Weekly

If PyCoder's Weekly isn't enough, Python Weekly has got you covered. It's a free weekly e-mail newsletter for those interested in Python development and various topics around Python.

Schedule: Every Friday Visit Site

PHP Weekly

As the name suggests, PHP Weekly is a newsletter featuring the best articles, tutorials, talks, news, jobs, and tools about PHP. Based on my experience so far, every issue is a fat one. And when I say fat, I mean that there's a bunch of stuff in there for you to consume. You couldn't ask for more from this newsletter. It's the best thing there is if you want to keep yourself updated about PHP stuff.

Schedule: Every Thursday Visit Site

Postgres Weekly

Postgres Weekly is a free, once–weekly e-mail round-up of PostgreSQL news and articles.

Schedule: Every Wednesday Visit Site

Web Design Update

Web Design Update is a plain text email digest dedicated to disseminating news and information about web design and development, with emphasis on elements of user experience, accessibility, web standards, and more.

Schedule: Every Wednesday Visit Site

ng-newsletter

Ng-newsletter is a weekly newsletter of the best AngularJS content on the web.

Schedule: Every Wednesday Visit Site

Ember Weekly

Ember Weekly is a newsletter dedicated to bringing you the latest Ember.js news, tips, and libraries.

Schedule: Every Monday Visit Site

HTML5 Weekly

HTML5 Weekly is a newsletter featuring a round-up of HTML5 and Web Platform technology: CSS 3, Canvas, WebSockets, WebGL, Native Client, and more. Basically all things HTML5 and related technologies.

Schedule: Every Wednesday Visit Site

Perl Weekly

Perl Weekly features hand-picked news and articles about Perl.

Schedule: Every Monday Visit Site

DevOps Weekly

DevOps Weekly curates the best and latest articles all about DevOps. If you're not familiar with DevOps, it's basically short for Development and Operations. It's mainly focused on IT operations, tooling, and collaboration.

Schedule: Every Monday Visit Site

UX Newsletter

Another newsletter from Stack Exchange, this one features the most interesting questions about User Experience from the past week.

Schedule: Every Monday Visit Site

Web Developer Reading List

An all in one newsletter for web developers. It contains news on both front-end and back-end stuff.

Schedule: Every Friday Visit Site

CSS Weekly

A weekly e-mail round-up of CSS articles, tutorials, experiments, and tools curated by Zoran Jambor. There's not much in every issue, but the quality makes up for it.

Schedule: Every Tuesday Visit Site

Podcasts I Listen To


To take a bit of a break from the usual web development tutorials that I publish, this week I'll be talking about some of the podcasts that I usually listen to when I'm just chilling out doing nothing. Podcasts are a really good way to keep yourself updated as a developer even when you're not in front of a computer.

Shoptalkshow

A podcast about front-end web design, development, and UX. It's hosted by Chris Coyier and Dave Rupert. Each week they either interview someone from the industry or have a Rapidfire show in which they answer questions submitted by their listeners.

Visit Site

Ruby Rogues

Despite what the name suggests, Ruby Rogues isn't exclusively about Ruby. I'm not a Ruby developer myself, but I often listen to this podcast because they usually talk about general stuff that developers would want to hear: things like self-evaluation, staying sharp, and education. They also usually invite someone from the industry to be on the show, so that's a bonus as well. At the end of each episode they have a picks section in which each of the hosts picks anything they want to plug on the show, such as books, games, or a random article.

Visit Site

JavaScript Jabber

Pretty much like the Ruby Rogues podcast, because it's created by the same guy: Charles Max Wood. They mostly invite JavaScript developers to talk about their projects, such as Guillermo Rauch of Socket.io and Jo Liss of Broccoli.js.

Visit Site

Freelancer Show

Another podcast from Charles Max Wood, the Freelancer Show. As the name suggests, it's a podcast about freelancing. If you're looking into freelancing part-time, or you want to freelance full-time, then this podcast is for you.

Three Devs and a Maybe

A podcast about Web Development. Though if you visit their website they usually talk about PHP stuff. If you’re a PHP developer then this podcast should definitely be on your listening list.

Visit Site

This Developer’s Life

A podcast about developers and their lives. Though this podcast is no longer ongoing, most of the things that you'll find in here are still relevant; it's about the daily lives of developers, after all. The content is mostly in story format, and each episode has a specific theme: things like obsession, learning, competition, getting fired, and many others.

Visit Site

The JavaScript Show

Though this podcast is no longer active, it has some good stuff that you might want to check out. Each episode is fully dedicated to JavaScript, both client-side and server-side. It's from the same guy (Peter Cooper) who curates the content for the JavaScript Weekly newsletter, so the JavaScript Show is basically JavaScript Weekly in audio format.

Visit Site

FaceOff Show

Another podcast which is no longer active but still pretty useful is the FaceOff Show. They had a total of 126 episodes before they stopped, but the content is still available on their website to download or listen to. The FaceOff Show is a holistic podcast; it's basically all of the podcasts mentioned above combined into one. In other words, it's all things development.

Visit Site

Getting Started With Amazon EC2


In this tutorial I'm going to give you an introduction on how to set up an Amazon EC2 instance that uses the LAMP stack. This tutorial assumes that you already have an AWS account set up.

Setting up the instance

The first thing that you need to do is log in to your AWS account. Once logged in, click on the instances link found on the left side of the screen. Once on the instances page, click on the ‘Launch Instance’ button. You will then be redirected to the page where you can select the operating system that will be used for the instance that you want to create:

choose AMI

If you're using Ubuntu for your development, it would be much easier for you to also select Ubuntu Server; the 64-bit version is preferred. Just click on the ‘Select’ button beside the Ubuntu instance.

Next, we need to select the instance type. For starters you may want to try the t2.micro instance, as it's eligible for the free tier; this means that you don't have to pay anything when you launch this type of instance.

choose instance

If you’re looking into launching an instance which exactly fits your needs, check out ec2instances.info. Note that an instance that’s not eligible for free tier would cost you per hour so be really careful with the instance that you select.

Once you're done selecting the instance type, click on the ‘Next: Configure Instance Details’ button. That will redirect you to the page where you can configure details about your instance: things like the Virtual Private Cloud, Subnet, and Public IP. Usually you don't really have to touch these settings, so just leave the defaults.

configure instance

Next click on the ‘Next: Add Storage’ button. That will redirect you to the page where you can configure the size and volume type of the storage that will be used for your instance. Just input 30 for the size as free tiers are eligible for up to 30 GB. If you have selected something higher than the free tier, you can find information on how much storage size you can have at ec2instances.info. For the volume type, just use the general purpose SSD.

add storage

Next click on the ‘Next: Tag Instance’ button. That will redirect you to the page where you can assign a key-value pair to your instance. This allows you to tag your instance with those key-value pairs which enables you to categorize your AWS resources in different ways. We won’t really be using tags in this tutorial so if you want to learn more about tagging your instance, check out the official docs.

tag instance

Next click on the ‘Next: Configure Security Group’ button. That will redirect you to the page where you can configure the security group used by the instance. In simple terms, security groups allow you to set the ports used by your instance and which IP addresses are allowed to access those ports. You can assign different settings for inbound and outbound rules. Inbound rules are the settings used for requests made to your server by other computers.

For inbound rules you would commonly have the following settings:

  • Type: SSH – this allows you to access your instance via SSH.
  • Protocol: TCP
  • Port: 22
  • Source: 0.0.0.0/0 – if you have a static IP assigned to your computer, it's more secure to set that IP for this field. Otherwise just select ‘Anywhere’, which allows access from any IP.

  • Type: HTTP – this allows you to access your instance from the browser.
  • Protocol: TCP
  • Port: 80
  • Source: 0.0.0.0/0 – this means anyone with internet access can reach your instance via the DNS provided by Amazon or the public IP assigned to your instance.

For outbound rules:

  • Type: HTTP – this allows your instance to download stuff from the internet.
  • Protocol: TCP
  • Port: 80
  • Destination: 0.0.0.0/0 – this means that your instance can make requests to any server.

  • Type: MYSQL – this allows your instance to make requests to a MySQL server.
  • Protocol: TCP
  • Port: 3306
  • Destination: 0.0.0.0/0 – this allows your instance access to any MySQL server. You can also set this to the private IP of your instance. You can only specify a single IP, so if you're planning to access other MySQL servers aside from the one installed on your EC2 instance, just select ‘Anywhere’.

That’s pretty much it.

You can learn more about security groups in this page: Amazon EC2 Security Groups

Once you're done configuring the security group, click on the ‘Review and Launch’ button. You can now review the details of the instance; once you're done reviewing, just click on the ‘Launch’ button. Amazon will then prompt you to create an SSH key, or to use an existing one if you already have it. The SSH key is used to authenticate yourself when logging in to your instance via SSH. Keep it somewhere you can easily find it; I prefer putting it in the ~/.ssh directory.

Installing Software

Now that you have launched the instance, you can access it via SSH. To do that, log in to your Amazon account, click the ‘services’ link on the upper left corner of the screen, hover on the ‘All AWS Services’ link, then click on ‘EC2’. That will redirect you to the EC2 dashboard page. Once you're there, click on the ‘instances’ link. This lists all the instances you have created in the currently selected region. If nothing is listed on that page, the instance you created might be in another region; to change the region, use the region selector (the second item from the right in the top navigation, the one that looks like a place in the world) and pick the region where you launched the instance.

Next, click on the instance listed, then copy the value for the ‘Public DNS’. Open up a terminal, cd into the directory where you have your SSH key, then execute the following command:

ssh -i amazon-aws.pem ubuntu@the-public-dns-of-your-instance

Breaking the command down: -i allows you to specify the SSH key file; in this case the file name is amazon-aws.pem. Next is the username of the user you want to log in as; in this case it's ubuntu, the default username for Ubuntu EC2 instances. Then comes @ followed by the Public DNS of your instance. If you have already assigned a domain name to your instance, you can use that instead.
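If you connect to the instance often, you can shorten that command with an entry in your ~/.ssh/config file. This is just a convenience sketch; the my-ec2 alias and the key path are example values you'd replace with your own:

```
Host my-ec2
    HostName the-public-dns-of-your-instance
    User ubuntu
    IdentityFile ~/.ssh/amazon-aws.pem
```

With that in place, connecting is just ssh my-ec2.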

Once you're logged in you can start installing software. EC2 instances don't come with Apache, PHP, and MySQL pre-installed, so you need to install them yourself. Here is some of the software that I usually install on an EC2 instance:

Curl:

sudo apt-get update
sudo apt-get install curl
sudo apt-get install libcurl3 php5-curl

Composer:

curl -sS https://getcomposer.org/installer | php
sudo mv composer.phar /usr/local/bin/composer

Apache:

sudo apt-get install apache2
sudo /etc/init.d/apache2 restart

PHP:

sudo apt-get install php5
sudo apt-get install libapache2-mod-php5

MySQL:

sudo apt-get install mysql-server
sudo apt-get install php5-mysql

Configuring Apache

Once everything is installed you still have to configure Apache to use a different web directory, because the default one isn't really that friendly: you have to sudo every time you need to save or update something in it. My preferred directory is one under the home directory, as you won't need any special privileges to do anything inside it. To configure Apache to use a different directory, cd into the /etc/apache2 directory, then open up the apache2.conf file using a text editor of your choice, such as nano, vi, or vim:

sudo nano apache2.conf

Now look for the Directory directives and update them to use values similar to the following:

<Directory />
        Options FollowSymLinks
        AllowOverride None
        Require all denied
</Directory>

<Directory /usr/share>
        AllowOverride None
        Require all granted
</Directory>

<Directory /home/ubuntu/www>
        Options Indexes FollowSymLinks
        AllowOverride All
        Require all granted
</Directory>

In the configuration above we're using /home/ubuntu/www as the web root directory. You can change this to any directory under your home folder; just be sure that the directory exists.
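Apache can't serve from a directory that doesn't exist, so create the web root with a quick test page first. This is a sketch: the www folder name matches the config above, and the phpinfo page is just an example file (on the EC2 instance, ~ resolves to /home/ubuntu, so ~/www is the /home/ubuntu/www used in the config):

```shell
# create the web root in the home directory and drop in a test page
mkdir -p ~/www
echo '<?php phpinfo();' > ~/www/index.php
```

Once Apache is restarted, visiting your instance's Public DNS in a browser should show the PHP info page.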

Still in the same directory, cd into the sites-available directory, then open up the 000-default.conf file. Look for the DocumentRoot directive and specify the path to your web root directory:

DocumentRoot /home/ubuntu/www

Once everything is done, restart Apache using the following command:

sudo service apache2 restart

Conclusion

That's it! In this tutorial you have learned how to set up an EC2 instance and install the software needed to host a website. You can use the free tier to quickly test out an app idea and bring it online for everyone to try.

Getting Started With Supervisor


Recently at work I had a Node.js script that I needed to run persistently. It's basically a server that generates images based on some JSON data passed from the client side. So I did some searching and found Supervisor, a process control system that allows you to run programs persistently.

Installation

You can install Supervisor by executing the following command in your terminal:

sudo apt-get install supervisor

Configuration

Once the installation is done, you can create the config file. This is where you specify which script you want to run, the directory where you want to run it, and a log file to which the output is redirected.

sudo nano /etc/supervisor/conf.d/image-creator.conf

Here’s what a config file looks like:

[program:imagecreator]
command=node image-creator.js
directory=/home/ubuntu/www
stdout_logfile=/home/ubuntu/logs/image-creator.log
redirect_stderr=true

Breaking it down: the first line sets the name of the program. The program: prefix is always there; only the part after it changes. In this case the name I gave the program is imagecreator.

[program:imagecreator]

Next is the command that you would execute when running the program in the terminal. In this case we're executing the script via the node command:

command=node image-creator.js

Next is the directory where the program is stored. This can also be the directory where you want to execute the program:

directory=/home/ubuntu/www

This is where you specify the file where you want to redirect the output of the program:

stdout_logfile=/home/ubuntu/logs/image-creator.log

Lastly, we specify whether to send back the stderr output to supervisord on its stdout file descriptor:

redirect_stderr=true

That's pretty much all we need for the configuration file, so you can go ahead and save it. If you want to specify more settings, check out the docs on configuration.
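Two settings worth knowing about are autostart and autorestart, which make Supervisor start the program when supervisord boots and restart it when it exits. The snippet below is a sketch of the same config with those options added; it's optional, not something the setup above requires. Also note that Supervisor won't create the log directory for you, so /home/ubuntu/logs has to exist beforehand:

```
[program:imagecreator]
command=node image-creator.js
directory=/home/ubuntu/www
stdout_logfile=/home/ubuntu/logs/image-creator.log
redirect_stderr=true
autostart=true
autorestart=true
```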

Adding the Process

Now that we have a configuration file in place, we can tell Supervisor to add it to the list of processes that it manages. You can do that using supervisorctl:

sudo supervisorctl

Executing the command above will put you inside the supervisor shell. Next execute the following commands in order:

reread
add imagecreator
start imagecreator

Breaking it down:

  • reread tells supervisor to read the configuration files that are available.
  • add tells supervisor to add the program into the list of programs that it will manage.
  • start tells supervisor to run the program.

Conclusion

That's it! Supervisor is a neat little program that allows you to run programs persistently. Just be sure that errors are handled accordingly, because Supervisor won't continue running your program if an error occurs while it's running.

Using Datatables With Laravel


In this tutorial I'll be walking you through how you can use Datatables in Laravel. But first, let me give you a quick intro on what Datatables is: it's basically a jQuery plugin that allows you to add advanced interaction controls to your HTML tables, things like search, pagination, sorting, and ordering, with minimal code.

In this tutorial we're going to use a Laravel package called Chumper, which allows us to easily create Datatables that use the data returned from a model as their data source.

First thing that you need to do is to add the following in your composer.json file:

"require": {
  "chumper/datatable": "2.*"
}

If you have other packages that you need for your project, just add them alongside this entry in the require object. Once you're done with that, execute composer update from your terminal to install Chumper.
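For example, a require block that already lists Laravel itself would look something like this (the laravel/framework version here is illustrative, not something this tutorial requires):

```
"require": {
  "laravel/framework": "4.2.*",
  "chumper/datatable": "2.*"
}
```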

Once composer finishes installing Chumper, add the service provider for Chumper into the providers array in your app.php file inside the app/config directory of your Laravel installation:

'Chumper\Datatable\DatatableServiceProvider',

Still inside the app.php file, also add the following under the aliases array:

'Datatable' => 'Chumper\Datatable\Facades\DatatableFacade',

Once that’s done, you can now create the main configuration file by executing the following from the terminal:

php artisan config:publish chumper/datatable

The main configuration file is stored under app/config/packages/chumper/datatable/config.php so go ahead and edit that if you want to change the default settings provided by Chumper. Things like the class or ID given to the tables generated can be configured from that file. This is particularly useful if you want to use classes or IDs to style the datatables in a specific way. Other than that the default settings can be used for most cases.

Now that we have configured Chumper, we can add a route in your routes.php file that returns the page where the datatable is displayed. In the example below, we have a controller called AdminController, and we're using the data returned by the users method as the response whenever the users route is accessed via the GET method:

<?php
Route::get('users', 'AdminController@users');
?>

Next we also need to add the route that will return the data to the client side. By default, Chumper uses the server for processing queries made through the datatable. This means that it only gets the actual data that is needed instead of fetching all of the records in the database table that you specify. In the code below, we're giving the name api.users to the api/users route so that we can refer to it later in the controller. The uses keyword allows you to specify a controller action for the route. It's basically the same thing as what we did above, but that's the way to do it if you're using named routes.

<?php
Route::get('api/users', array('as' => 'api.users', 'uses' => 'AdminController@getUsersDataTable'));
?>

Under your controller, here’s the method that returns the page where the datatable is displayed:

<?php
public function users(){

    $table = Datatable::table()
      ->addColumn('Name', 'Last Login', 'View')
      ->setUrl(route('api.users'))
      ->noScript();

    $this->layout->content = View::make('admin.users', array('table' => $table));
}
?>

The code above assumes that you're using Laravel layouts. If you don't know how to use layouts in Laravel, be sure to check out the docs. Breaking the code down: the following code creates the datatable. You can add columns to it using the addColumn method, which takes the names that you want to give to the header of each field in the table. The setUrl method sets the route that the datatable will use for processing queries made through it. Earlier we created a route and named it api.users, so in setUrl all we have to do is call the route method with the name of the route responsible for returning the data. Lastly, we call the noScript() method to specify that we don't want the JavaScript code included in the response.

<?php
$table = Datatable::table()
  ->addColumn('Name', 'Last Login', 'View')
  ->setUrl(route('api.users'))
  ->noScript();
?>

Next is the method which processes the queries made through the datatable:

<?php
public function getUsersDataTable(){

    $query = User::select('name', 'active', 'last_login', 'id')->get();

    return Datatable::collection($query)
        ->addColumn('last_login', function($model){
            return date('M j, Y h:i A', strtotime($model->last_login));
        })
        ->addColumn('id', function($model){
            return '<a href="/users/' . $model->id . '">view</a>';
        })
        ->searchColumns('name', 'last_login')
        ->orderColumns('name', 'last_login')
        ->make();
}
?>

Breaking it down: the code below specifies the fields that you want to use for the response. These are the actual field names in your database table:

<?php
$query = User::select('name', 'last_login', 'id')->get();
?>

Next, we return the actual data using the collection method of the Datatable class. Well, not actually the Datatable class, since it's just the facade that we registered earlier in the app.php file. The collection method requires the result set returned by our query to the users table earlier, so we set that as the argument. After that, we can call the addColumn method to update the presentation of the data returned for a specific field. In the case of the last_login field, it's stored in the database as a timestamp that looks like this: 2014-07-29 11:37:39. We don't really want to present it to the user like that, so we format it using the date method. The first argument is the format that you want; in this case we want something like Jul 29, 2014 11:37 AM, and looking at the official docs we know we can get that by specifying M j, Y h:i A. The second argument is a Unix timestamp, so we convert the raw value that came from the database using the strtotime method. Next is the id field. We don't actually want to display the user's id; what we want is a link that leads to the page where more details for the user can be viewed. Thus we return an HTML anchor tag which uses the id as part of the actual link.

<?php
return Datatable::collection($query)
    ->addColumn('last_login', function($model){
        return date('M j, Y h:i A', strtotime($model->last_login));
    })
    ->addColumn('id', function($model){
        return '<a href="/users/' . $model->id . '">view</a>';
    })
?>

Lastly, we can now display the datatable in our view. If you’re using Twitter Bootstrap, it should look similar to this one:

@section('content')

<div class="row">
  <div class="col-md-12">
  <h3>Users</h3>
  {{ $table->render() }}
  {{ $table->script() }}
  </div>
</div>
@stop

Yup, as simple as that! All we have to do is call the render() method to render the actual datatable, and then call the script() method to render the JavaScript that does the talking to the server every time the user interacts with the table.

Introduction to Contact Plugin for Octopress


In this blog post I'll be introducing the Contact plugin for Octopress. This plugin allows you to create contact forms with ease. It uses pooleapp.com to save the data for the forms that are submitted.

Create a Pooleapp account

First let's go through pooleapp. Poole is a free, hosted data store for static sites. It allows you to post data to it; later on you can retrieve the data using a simple API.

You don't have to register to start using pooleapp, but it's recommended so that you can keep track of the forms that you create. Another bonus is that when someone submits data to your contact form, pooleapp will immediately notify you via email.

Once you've registered an account, you can create a new form. Just give your form a unique name and click on the ‘create form’ button. Once created, pooleapp will ask you for the email address to which you want the notifications sent.

Installing the plugin

Octopress doesn’t really have a plugin system so we’ll have to do things manually. First thing that you need to do is to add the contact.rb file into the octopress/plugins directory.

So that we can show a success message once the visitor submits the contact form, we also need to add the contact.js file inside the source/javascripts directory. Basically what it does is check for the existence of the form query parameter; if it exists, it makes the success message visible.

For the styling, add the _contact.scss file inside the sass/partials directory. Then in your sass/_partials.scss file, import the css for the contact form by adding the following on the last line:

@import "partials/contact";

Lastly, under the source/_includes/custom directory, add a script tag that points to the contact.js file on the last line:

<script src="/javascripts/contact.js"></script>

Using the plugin

To use the plugin in any of your pages, simply use the contact liquid tag, then supply your pooleapp API key as the first argument and the redirect URL for when the form is submitted as the second:

{% contact YOUR_POOLE_APP_API_KEY http://YOURSITE.COM/PAGE?form=ok#alert-box %}

Demo

You can try out the demo on the about me page of this blog.

Setting Up SSL on Apache


In this blog post I'll walk you through setting up SSL on Apache. When talking about SSL, the popular choice is OpenSSL, an open source toolkit implementing Secure Sockets Layer (SSL) and Transport Layer Security (TLS), so that is what we will be using for this tutorial.

Install OpenSSL

The first thing you need to do is determine the latest version of OpenSSL from the sources page; it's usually the one highlighted in red. Once you find it, copy its address, then use wget to download it to your preferred directory:

wget http://www.openssl.org/source/openssl-1.0.1h.tar.gz

Next create the directory where you want to install openssl:

sudo mkdir /usr/local/openssl

Extract the archive:

tar -xvzf openssl-1.0.1h.tar.gz

Then cd into it:

cd openssl-1.0.1h

Next execute the config command to set the installation path for openssl; this should be the same as the directory you created earlier. Watch the output for any errors:

./config --prefix=/usr/local/openssl --openssldir=/usr/local/openssl

Next execute make to compile the source code. If this doesn't work for you, try adding sudo before the command. After make is done and there aren't any errors, you can execute make install to install the files in their appropriate directories.

Once that’s done you can verify that openssl is successfully installed by executing the following command:

/usr/local/openssl/bin/openssl version

Generate Keys

Once you're done installing openssl you can assign its location to a variable:

export OpenSSL_HOME=/usr/local/openssl

And then add it to your system path:

export PATH=$PATH:$OpenSSL_HOME/bin

Next create a private key:

openssl genrsa 2048 > privatekey.pem

In the above command, genrsa 2048 tells openssl to generate an RSA key that is 2048 bits long. RSA is a public-key algorithm used for encryption and digital signing.
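If you want to confirm what was generated, openssl can describe the key back to you. A small sketch, using the -out flag instead of shell redirection:

```shell
# Generate a 2048-bit RSA private key (equivalent to the command above).
openssl genrsa -out privatekey.pem 2048

# Describe the key; the first line of the report states its length,
# e.g. "Private-Key: (2048 bit ...)" depending on your openssl version.
openssl rsa -in privatekey.pem -noout -text | head -n 1
```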

Next create a CSR (Certificate Signing Request) using the private key that we have just generated:

openssl req -new -key privatekey.pem -out csr.pem

The command above will ask for the following:

  • Country Name – the 2-letter abbreviation of your country name
  • State or Province – (e.g. California)
  • Locality Name – (e.g. Palm Desert)
  • Organization Name – the name of your company
  • Organization Unit – the name of your website
  • Common Name – the domain name of your website (e.g. mywebsite.com)
  • Email Address – your email address

The information above will be used for the certificate that will be assigned to you later on so be sure to supply the correct information.
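If you’d rather skip the interactive prompts, the same details can be passed on the command line through the -subj flag. Here’s a sketch where every value is a placeholder to be replaced with your own:

```shell
# Generate the private key and a CSR in one go, answering the
# prompts via -subj (all values below are placeholders).
openssl genrsa -out privatekey.pem 2048
openssl req -new -key privatekey.pem -out csr.pem \
  -subj "/C=US/ST=California/L=Palm Desert/O=My Company/OU=mywebsite/CN=mywebsite.com/emailAddress=admin@mywebsite.com"

# Print the subject back out to double-check what was baked into the request.
openssl req -in csr.pem -noout -subject
```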

Enable SSL on Apache

Now that we have generated the keys we need, we can configure Apache to use them. First, enable the SSL module by executing the following command:

sudo a2enmod ssl

Then restart apache for changes to take effect:

sudo service apache2 restart

Next edit the ssl configuration file for apache:

sudo nano /etc/apache2/sites-available/default-ssl.conf

Comment out the following lines by adding a pound (#) sign before them:

SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key

Next look for the following line:

<VirtualHost _default_:443>

And then under it set the server information:

ServerAdmin admin@mywebsite.com
ServerName mywebsite.com
ServerAlias www.mywebsite.com
DocumentRoot /home/www

Next look for SSLEngine On and then under it add the following:

SSLCertificateFile /home/wern/signed-certificate.crt 
SSLCertificateKeyFile /home/wern/privatekey.pem 

The SSLCertificateFile directive is where you specify the path to your website’s digital certificate. I didn’t cover acquiring one because there are a lot of certificate authorities out there. So far I’ve only tried Namecheap, and it’s pretty easy to acquire a certificate from them:

  • Create an account and log in. Once you’re logged in, click on the Security menu and select SSL Certificates.
  • Click on the button under Domain Validation, add your preferred certificate to the cart, and go through the checkout steps.
  • Once you have purchased a certificate, hover over your user name on the upper left side of the screen and select Manage SSL Certificates. That brings you to the page where all your certificates are listed; by default a new one just sits there waiting to be configured.
  • Configure it, selecting Apache + OpenSSL when it asks for your server configuration.
  • When it asks for the CSR, copy the contents of the csr.pem file we generated earlier, paste it into the textarea, then click Submit and follow the remaining steps provided by Namecheap.
  • Once everything is ok, Namecheap will send you the certificate via email. Copy it and save it on your server; the path to that file is what you assign to SSLCertificateFile.

The SSLCertificateKeyFile directive is the path to your private key, which in our case is the privatekey.pem file.

Once that’s done you just have to enable it:

sudo a2ensite default-ssl.conf

And then restart apache so that the changes will take effect:

sudo service apache2 restart

That’s it! Enjoy your new HTTPS-enabled website. The next step would be to redirect all HTTP requests to HTTPS, but I’ll leave that one to you.
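In case you want a starting point, one common approach is a plain-HTTP virtual host that forwards everything to HTTPS. A minimal sketch, assuming the same mywebsite.com domain used in the examples above:

```apache
<VirtualHost *:80>
    ServerName mywebsite.com
    ServerAlias www.mywebsite.com
    # Send every plain-HTTP request to the HTTPS version of the same URL.
    Redirect permanent / https://mywebsite.com/
</VirtualHost>
```

Restart Apache afterwards for the redirect to take effect.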

How I Work With Clients

| Comments

In this post I’m going to share some of the things I usually do when working with my clients.

Getting Projects

First off, I’m not actively looking for work since I already have a full-time job, so I usually let potential clients contact me for projects. My contact details are on the about me page, and they can reach me via my primary email or Skype. I have Twitter, but I usually don’t entertain people who contact me there. There’s also LinkedIn, but most of my contacts there are recruiters, which is no good because they usually come at you with full-time jobs at a physical office somewhere.

Now that you know how I get client work, it’s time to proceed with the how. The first thing that happens is I receive an email or a Skype contact request with some project details in it, something like:

Hey I read your blog post on {Some blog post I’ve written before} and I think you would be able to do this project. {An overview of the project}. Is this something you’re interested in doing for us?

Depending on my current workload and how interesting the project is, I either decline or accept. If I still have a bit of time and the project is interesting, I usually say yes. If it’s not interesting, I say no even if I have a lot of free time after work. I don’t really like doing something I don’t enjoy just for the sake of some cash.

Once I’ve decided to accept the project, I send an email saying so. Here’s a template that I usually go with:

Hi {first name of client},

Yes I’m interested in this project. However I currently have a full time job thus I won’t be able to work on this project full time. I can only do this after I’m done with my work or on some free time on weekends. If you’re ok with this then I’ll happily accept this project.

Regards,
Wern

As you can see above, I always try to make my current occupation clear, whether that’s a full-time job or another project I’m working on. Only if the potential client is ok with that does the project begin.

Introductory Email

At the beginning of a project I usually send an email to introduce myself along with some of the guidelines and processes I follow when working on a project. Something like this:

Hi {first name of client},

Thank you for understanding the situation. I can begin doing the project starting tomorrow. But first here are some guidelines that I follow when working on a project:

- First. All things that have something to do with the project should be added on Trello, a web-based project management tool. I’ve already invited you to it; please accept my invitation so you can familiarize yourself with it. If you have any questions, suggestions or clarifications regarding the project, please add them on Trello.
- Second. If you need to talk to me you can contact me on Skype, but first send an email saying that you want to talk and I’ll try to look for a good time. Here is my skype user name: wernancheta
- Third. I may not always be available, so please understand that I can’t always immediately reply to an email or a question on Trello.
- Fourth. I usually put a number of features into a group. Once a specific group is satisfactorily completed, I ask for a payment.
- Fifth. Estimates cover up to 3 small revisions for each feature. Small revisions don’t take more than 10 minutes to do; anything that takes longer than that I’ll have to charge an additional fee for.

Regards,
Wern

This usually goes smoothly and the client says ok.

Trello Workflow

Next is the Trello workflow. What I do is stick with the following list:

  • To do – items that my client and I have talked about.
  • Won’t do – items that we have decided not to do. The usual reasons are that the client no longer wants the feature or it has been postponed until a later time.
  • Doing – items from the To do list that I’m currently working on.
  • Done – items that I believe are already done. I usually manually test items before I move them to this list. When there are issues with an item, the client can just comment their issue on that specific item. Once I’ve confirmed that it’s a real issue that needs to be worked on, I move the item back into the Doing list.
  • Proposals – features that I consider necessary which the client didn’t mention. Items from here get moved to the To do list once I get the client’s approval.
  • Other Info – anything else about the project that doesn’t belong to any of the above. Initially this is where I put a quick tutorial on how to use Trello.

On each of the lists I put a README card to guide the client on what that list is for.

Trello is great for clients who love asking for project progress every second. One look at Trello and they already have an idea of what still needs to be done, what I’m currently working on, and what else I have to do.

Development

When developing I usually push the files to OpenShift because they offer free hosting for up to 3 projects. Databases are also covered, so it’s really sweet considering that it’s free. By using OpenShift I can also ensure that my clients can’t just run away with the source code and call it a day. If I’ve already established a certain amount of trust with the client and they have a server where I can put the source code, then I use their server instead.

Payments

Lastly, there are the payments. I don’t receive payments up front; this is how I establish trust with the client. So if the client is not some kind of heartless villain who enjoys not paying for someone’s service, I can usually expect them to pay. What I do is group the features that I’ll be working on into 2, 3 or 4 groups depending on the number of features. I usually arrive at 4 groups, which means I’ll be asking the client for payment 4 times. Once the first group is satisfactorily done without issues, I email my client. I go with the following template:

Hi {first name of client},

Here’s the break down for the {name of group}:

{List of features here}

Total: {total price}

You can pay via PayPal to this email address: {my paypal email address}

Regards,
Wern

That’s it! You might have noticed that I didn’t mention anything about contracts. That’s because I don’t do contracts. I believe contracts just give you the power to sue someone and go to court. Because I usually work remotely, I don’t think I can go to court if my client is on the other side of the world. So if they don’t pay, I’ll just pray for their souls.

What I’ve Been Up to Lately

| Comments

You might have noticed that I no longer publish new blog posts as frequently as I have before. That is because I’ve been busy with other stuff lately. It all started when I joined Islick Media last March. My job at Islick Media is pretty much the same as a regular job where you work 8 hours a day, 5 days a week. Nothing out of the ordinary.

Then I got an unexpected project from someone who read my blog post on the Amazon Product Advertising API. I was hesitant at first because I’m already happy with my job and my salary. After some pondering I thought that extra income would be nice, so I gave it a shot and emailed the person back, making it clear that I currently have a full-time job and would only be able to do the project in my free time. The person replied saying that it’s ok, and the rest is history. I got the project last April and it is still ongoing, so most of my free time goes into that.

Going back to the month of February, I also tried emailing Sitepoint, a company dedicated to publishing awesome articles on web development. It was pretty much a cold email saying that I wanted to write for them: that I’d been writing articles about web development for a while, but only on my blog, and that I wanted to try to make money doing it. I waited, but didn’t get a reply after a week, so I thought they weren’t interested. Then, after exactly a month, the managing editor of the Sitepoint PHP channel emailed me back with an apology for not getting back to me sooner. But the important part is that I got an ok. And man! That was the most awesome feeling ever! Sitepoint is one of the most popular websites publishing resources (books, articles, courses) on web development. The fact that I get to write for them is really just awesome.

Lastly, I’m also occupied with a personal project that I’m hoping will turn into a nice source of passive income. I can’t say anything about the project yet, but once I get it out there I’ll publish a blog post about it, so stay tuned for that.

And that’s pretty much what I’ve been up to lately. I don’t think I’ll be able to write anything lengthy on blog soon. But I’ll try publishing some short tutorials so I still have fresh content in my blog even if I’m busy. But basically the series on the Whirlwind tour on Web Developer Tools isn’t going to continue soon. I’d like to provide as much information as I could on each part of the series. But I don’t think I have time to write lengthy posts so I’m going to temporarily stop the series.

That’s it for this blog post. At times like this I really wish the Hyperbolic Time Chamber was for real so I don’t need to prioritize things and just do everything I want to do.

Things I Learned on My Third Job

| Comments

It’s been 3 months since I joined Islick Media, a web development shop based in Palm Desert, California. Just like with my previous jobs, I work for them remotely. In this blog post I’ll share some of the things I’ve learned on the job.

Synxis

Synxis is a reservation system, and it’s a pain in the neck to work with. Any code that has something to do with their reservation features is not accessible. At most you can only update the HTML for the header and footer parts of the page. Uploading new files is also painful, as you either have to install Java so you can run their image uploader, or suck it up and upload files one by one.

Wordpress Theme Customization API

I worked with the WordPress Theme Customization API on my first project at the company. I used it to give the users of the WordPress theme I created a simple way of customizing its look and feel. Things like customizable link colors, header and background images can go a long way in making your WordPress theme approachable for non-programmers.

Zillow

Zillow is a home and real estate marketplace dedicated to helping homeowners, home buyers, sellers, renters, real estate agents, mortgage professionals, landlords and property managers find and share vital information about homes, real estate, mortgages and home improvement. I’ve used their API in providing zestimates (zillow estimates) for real properties.

Laravel

This is not the first time I’ve encountered Laravel. It’s more of a reacquaintance, since I first used it in 2012 when it was newly released. Fast-forward to 2014 and a bunch of stuff has changed and improved. Some of my previous knowledge was still of use, but I also had to learn new things and new ways of doing them. I learned about the IoC container and how to make use of external classes the Laravel way. I also learned about the authentication class, which makes writing login functionality for your app a breeze.

Mailing Services

Mandrill and Mailgun are mailing services that I’ve used for sending out emails for my projects. Yes, you can pretty much use the built-in mail server on the server where your app is hosted, but the main advantage of a mailing service over the built-in mail server is authentication. With mailing services such as Mandrill or Mailgun, you get the benefit of having your email come from a reputable server. This leads to a higher rate of emails actually making it into your customers’ inboxes and not the spam folder.

SPF and DKIM

SPF and DKIM are ways to authorize mailing services such as Mandrill and Mailgun to send on behalf of your server, so you can get a cool-looking address like awesomeness@coolness.com to work and actually make it to your customers’ inboxes.
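To give an idea of what these look like, both are published as DNS TXT records on your domain. The values below are made-up placeholders; your mailing service’s dashboard will give you the exact records to add:

```dns
; SPF: authorize the mailing service to send mail for the domain
coolness.com.                IN TXT "v=spf1 include:mailgun.org ~all"

; DKIM: the service's public signing key, published under a selector it assigns
k1._domainkey.coolness.com.  IN TXT "k=rsa; p=MIGfMA0GCSq..."
```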

Amazon EC2

Short for Amazon Elastic Compute Cloud, it’s basically a cloud computing platform. You can use it to host web applications, and scaling is already taken care of. You can start with a fairly low-performance, low-storage server instance, and once you reach the point where the current instance is no longer performing well, you can just upgrade it and all your stuff will still be there.

Stripe

Stripe is a company that provides a set of APIs that enable businesses to accept and manage online payments. They have SDKs (Software Development Kits) available for different programming languages, which is nice since no matter what language your app is written in, you can use an SDK to easily talk to the Stripe API. Stripe uses credit cards for payments; one-time and subscription-based payments are automatically handled for you.

Twilio

Twilio is a cloud communications company. They allow developers to add SMS and voice functionality to websites. I used Twilio in my second project (Vmonial) with the company. Vmonial is an app that allows businesses to accept voice testimonials from their clients. I used the Twilio Voice API on the project. You basically control the call flow using XML files (TwiML) with tags like <Say>, <Record>, <Play>, <Gather> and <Response>.
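To give an idea of what TwiML looks like, here’s a hypothetical snippet in the spirit of Vmonial that speaks a prompt and then records the caller (the action URL is a placeholder):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Response>
  <!-- Read the prompt to the caller using text-to-speech -->
  <Say>Please record your testimonial after the beep.</Say>
  <!-- Record up to 60 seconds, then POST the recording details to our app -->
  <Record maxLength="60" action="http://example.com/handle-recording"/>
</Response>
```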

WHM

WHM is sort of the big mama of cPanel. This is where you can manage cPanel instances, users, third-party extensions, and lots of other stuff for administering a server.

Elastic Search

Elasticsearch is an open source, distributed and RESTful search engine. It’s like Apache Solr, which I’ve written about a few times on this blog. Lots of people say really good things about Elasticsearch, which is why I gave it a try on my third project (Roof99) to handle the search. MySQL was not a good fit, since it’s a general-purpose database and full-text searching on it would be terribly slow. Elasticsearch, on the other hand, is a search index: documents are stored in JSON format and querying is done through REST (Representational State Transfer) calls.
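For example, a simple search is just a JSON document sent to an index’s _search endpoint over HTTP. A sketch (the index and field names here are hypothetical):

```json
{
  "query": {
    "match": { "description": "two bedroom apartment" }
  }
}
```

With curl this could be sent as something like curl -XPOST http://localhost:9200/listings/_search -d @query.json, and the matching documents come back as JSON as well.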

Prediction IO

Prediction IO is an open source machine learning server. You can use it to create personalized applications. With Prediction IO you can predict your users’ behavior, offer personalized content (e.g. news, ads, jobs), and help them discover things that they might like. All of this is done by having the server silently record user activity within your app, such as viewing, liking, disliking, and rating things.

Phonegap / Cordova

Phonegap allows developers to create mobile apps using web technologies (HTML, CSS, JavaScript). Installing the stuff needed to compile those HTML, CSS, and JavaScript files is a real pain, though; sometimes you get an error that takes hours to solve. Thankfully there’s the Phonegap Build service by Adobe, which lets you upload your source files and, moments later, download app installers for the devices you support. This is pretty neat, since all you have to do is write HTML, CSS, and JavaScript like you always do, upload it to Phonegap Build and boom! You now have an installer for every mobile platform you support.

A QR code is also generated every time you update the source code of your app. You can then just use your phone or tablet’s QR code reader and it will directly download the installer, provided you’re connected to the internet.

There’s also hydration, which allows you to easily update already-installed apps. If you upload a new version of your app to Phonegap Build and then open up the app on the mobile device, hydration will detect the update and ask whether you want to apply it. So there’s no more need to re-install the app every time a new version is uploaded. Lastly, there are debugging tools that let you debug the current instance of the app on your mobile device from the browser.

This is all really sweet and awesome, but we still need to think about performance, app permissions, and writing the code in such a way that it will be easily maintainable. There’s also the mobile development mindset that you have to get into. What I’m saying is that you shouldn’t write Phonegap apps the way you write web applications, because the environment is different. In a browser environment, clicking a link loads up a new page, but in an app it will open up the browser and navigate to that link. So basically, most of the things you’d normally do on the server side have to be done using AJAX requests, and updating the UI can be done using templates and so on.

That’s it for now! In the coming months I’ll update this post and share more of the things I’ve learned on my current job.