Wern Ancheta

Adventures in Web Development.

Getting Started With Supervisor

| Comments

Recently at work I had a Node.js script that I needed to run persistently. It's basically a server that generates images based on JSON data passed from the client side. After some searching I found Supervisor, a process control system that lets you run programs persistently.

Installation

You can install Supervisor by executing the following command in your terminal:

sudo apt-get install supervisor

Configuration

Once the installation is done, you can create the config file. This is where you specify which script you want to run, the directory where you want to run it, and a log file to which the output is redirected.

sudo nano /etc/supervisor/conf.d/image-creator.conf

Here’s what a config file looks like:

[program:imagecreator]
command=node image-creator.js
directory=/home/ubuntu/www
stdout_logfile=/home/ubuntu/logs/image-creator.log
redirect_stderr=true

Breaking it down: the first line sets the name of the program. The program: prefix is always there; only the part after it changes. In this case I named the program imagecreator.

[program:imagecreator]

Next is the command you would execute when running the program in the terminal. In this case we're executing the script via the node command:

command=node image-creator.js

Next is the directory where the program is stored. This can also be the directory where you want to execute the program:

directory=/home/ubuntu/www

This is where you specify the file where you want to redirect the output of the program:

stdout_logfile=/home/ubuntu/logs/image-creator.log

Lastly, we specify whether to send back the stderr output to supervisord on its stdout file descriptor:

redirect_stderr=true

That's pretty much all we need for the configuration file, so you can go ahead and save it. If you want to specify more settings, check out the docs on configuration.
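If you want the process to come back up on its own after a crash, a few more options can be added. The sketch below extends the same config with settings Supervisor supports; check the docs for your version before relying on them:

```ini
[program:imagecreator]
command=node image-creator.js
directory=/home/ubuntu/www
stdout_logfile=/home/ubuntu/logs/image-creator.log
redirect_stderr=true
autostart=true        ; start the program when supervisord starts
autorestart=true      ; restart the program whenever it exits
startretries=3        ; give up after 3 failed starts in a row
```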

Adding the Process

Now that we have a configuration file in place, we can tell Supervisor to add it to the list of processes it manages. You can do that using supervisorctl:

sudo supervisorctl

Executing the command above will let you enter the supervisor program. Next execute the following commands in order:

reread
add imagecreator
start imagecreator

Breaking it down:

  • reread tells supervisor to read the configuration files that are available.
  • add tells supervisor to add the program into the list of programs that it will manage.
  • start tells supervisor to run the program.

Conclusion

That's it! Supervisor is a neat little program that allows you to run programs persistently. Just be sure that errors are handled accordingly, because Supervisor will eventually give up on your program if it keeps exiting with an error.

Using Datatables With Laravel

| Comments

In this tutorial I'll be walking you through how you can use Datatables in Laravel. But first, let me give you a quick intro on what Datatables is. Datatables is basically a jQuery plugin that allows you to add advanced interaction controls to your HTML tables: things like search, pagination, sorting, and ordering. Datatables lets you add those kinds of functionality to your tables with minimal code.

In this tutorial we're going to be using a Laravel package called Chumper. Chumper allows us to easily create Datatables that use the data returned from a model as their data source.

First thing that you need to do is to add the following in your composer.json file:

"require": {
  "chumper/datatable": "2.*",
}

If you have other packages that your project needs, just add them to the require object as well. Once you're done with that, execute composer update from your terminal to install Chumper.

Once composer finishes installing Chumper, add the service provider for Chumper into the providers array in your app.php file inside the app/config directory of your Laravel installation:

'Chumper\Datatable\DatatableServiceProvider',

Still inside the app.php file, also add the following under the aliases array:

'Datatable' => 'Chumper\Datatable\Facades\DatatableFacade',

Once that’s done, you can now create the main configuration file by executing the following from the terminal:

php artisan config:publish chumper/datatable

The main configuration file is stored under app/config/packages/chumper/datatable/config.php so go ahead and edit that if you want to change the default settings provided by Chumper. Things like the class or ID given to the tables generated can be configured from that file. This is particularly useful if you want to use classes or IDs to style the datatables in a specific way. Other than that the default settings can be used for most cases.

Now that we have configured Chumper, we can add a route to your routes.php file that returns the page where the datatable is displayed. In the example below, we have a controller called AdminController and we're using the data returned by the users method as the response whenever the users route is accessed via the GET method:

<?php
Route::get('users', 'AdminController@users');
?>

Next we also need to add the route that will return the data to the client side. By default, Chumper uses the server for processing queries made through the datatable. This means that it only fetches the data that is actually needed instead of all of the records in the database table that you specify. In the code below, we're giving the name api.users to the api/users route so that we can refer to it later in the controller. The uses key allows you to specify a controller action for the route. It's basically the same thing as what we did above, but this is how you do it when you're using named routes.

<?php
Route::get('api/users', array('as' => 'api.users', 'uses' => 'AdminController@getUsersDataTable'));
?>

Under your controller, here’s the method that returns the page where the datatable is displayed:

<?php
public function users(){

    $table = Datatable::table()
      ->addColumn('Name', 'Last Login', 'View')
      ->setUrl(route('api.users'))
      ->noScript();

    $this->layout->content = View::make('admin.users', array('table' => $table));
}
?>

The code above assumes that you're using Laravel layouts. If you don't know how to use layouts in Laravel, be sure to check out the docs. Breaking the code down, the following code creates the datatable. You can add columns to it using the addColumn method, which takes the names you want to give to the header of each field in the table. The setUrl method sets the route that the datatable will use for processing queries made through it. Earlier we created a route and named it api.users, so in setUrl all we have to do is call the route method and supply the name of the route responsible for returning the data. Lastly, we call the noScript() method to specify that we don't want the JavaScript code included in the response that will be returned.

<?php
$table = Datatable::table()
  ->addColumn('Name', 'Last Login', 'View')
  ->setUrl(route('api.users'))
  ->noScript();
?>

Next is the method which processes the queries made through the datatable:

<?php
public function getUsersDataTable(){

    $query = User::select('name', 'last_login', 'id')->get();

    return Datatable::collection($query)
        ->addColumn('last_login', function($model){
            return date('M j, Y h:i A', strtotime($model->last_login));
        })
        ->addColumn('id', function($model){
            return '<a href="/users/' . $model->id . '">view</a>';
        })
        ->searchColumns('name', 'last_login')
        ->orderColumns('name', 'last_login')
        ->make();
}
?>

Breaking it down, the code below allows you to specify the fields that you want to use for the response. These are the actual field names in your database table:

<?php
$query = User::select('name', 'last_login', 'id')->get();
?>

Next, we return the actual data using the collection method of the Datatable class. Well, not actually the Datatable class, since it's just the facade that we registered earlier in the app.php file. The collection method requires the result set returned by our query to the users table, so we pass that in as the argument.

After that, we can call the addColumn method to update the presentation of the data returned for a specific field. In the case of the last_login field, it's stored in the database as a timestamp that looks like this: 2014-07-29 11:37:39. We don't really want to present that to the user as-is, so we format it using the date method. The first argument is the format that you want; in this case we want something like Jul 29, 2014 11:37 AM, and looking at the official docs we know we can get that by specifying M j, Y h:i A. The second argument is a Unix timestamp, so we convert the raw value that came from the database into a Unix timestamp using the strtotime method.

Next is the id field. We don't actually want to display the user's id; what we want is a link that leads to the page where more details about the user can be viewed. So we return an HTML anchor tag which uses the id as one of the components of the actual link.
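As a quick illustration of the same conversion, here's the equivalent formatting logic sketched in plain JavaScript (the function name is made up for the example):

```javascript
// Format a MySQL-style timestamp ('2014-07-29 11:37:39') the same way
// PHP's date('M j, Y h:i A', strtotime(...)) does: 'Jul 29, 2014 11:37 AM'.
function formatLastLogin(timestamp) {
  const months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
                  'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'];
  const [datePart, timePart] = timestamp.split(' ');
  const [year, month, day] = datePart.split('-').map(Number);
  const [hour, minute] = timePart.split(':').map(Number);
  const meridiem = hour >= 12 ? 'PM' : 'AM';        // PHP's A
  const hour12 = hour % 12 === 0 ? 12 : hour % 12;  // PHP's h (12-hour)
  const hh = String(hour12).padStart(2, '0');
  const mm = String(minute).padStart(2, '0');
  return `${months[month - 1]} ${day}, ${year} ${hh}:${mm} ${meridiem}`;
}

console.log(formatLastLogin('2014-07-29 11:37:39')); // Jul 29, 2014 11:37 AM
```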

<?php
return Datatable::collection($query)
    ->addColumn('last_login', function($model){
        return date('M j, Y h:i A', strtotime($model->last_login));
    })
    ->addColumn('id', function($model){
        return '<a href="/users/' . $model->id . '">view</a>';
    })
?>

Lastly, we can now display the datatable in our view. If you’re using Twitter Bootstrap, it should look similar to this one:

@section('content')

<div class="row">
  <div class="col-md-12">
  <h3>Users</h3>
  {{ $table->render() }}
  {{ $table->script() }}
  </div>
</div>
@stop

Yup! As simple as that. All we have to do is call the render() method to render the actual datatable, and then call the script() method to render the JavaScript that does the talking to the server every time the user interacts with the table.

Introduction to Contact Plugin for Octopress

| Comments

In this blog post I'll be introducing the Contact plugin for Octopress. This plugin allows you to create contact forms with ease. It uses pooleapp.com for saving the data from the forms that are submitted.

Create a Pooleapp account

First let's go through pooleapp. Poole is a free, hosted data store for static sites. It allows you to post data to it and later retrieve that data using a simple API.

You don't have to register to start using pooleapp, but it's recommended so that you can keep track of the forms that you create. Another bonus is that when someone submits data to your contact form, pooleapp will immediately notify you via email.

Once you've registered an account, you can create a new form. Just give your form a unique name and click the 'create form' button. Once created, pooleapp will ask you for the email address to which you want the notifications to be sent.

Installing the plugin

Octopress doesn't really have a plugin system, so we'll have to do things manually. The first thing you need to do is add the contact.rb file to the octopress/plugins directory.

So that we can show a success message once the visitor submits the contact form, we also need to add the contact.js file to the source/javascripts directory. Basically what it does is check for the existence of the form query parameter; if it exists, it makes the success message visible.
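The check itself is straightforward. Here's a rough sketch of the kind of logic contact.js performs, not the plugin's actual code:

```javascript
// Return true when the given query string (e.g. window.location.search)
// contains a 'form' parameter; a rough equivalent of what contact.js checks.
function hasFormParam(queryString) {
  return queryString
    .replace(/^\?/, '')           // drop the leading '?'
    .split('&')
    .some((pair) => pair.split('=')[0] === 'form');
}

console.log(hasFormParam('?form=ok')); // true
console.log(hasFormParam('?page=2'));  // false
```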

For the styling, add the _contact.scss file inside the sass/partials directory. Then in your sass/_partials.scss file, import the css for the contact form by adding the following on the last line:

@import "partials/contact";

Lastly, under the source/_includes/custom directory, add a script tag that points to the contact.js file on the last line:

<script src="/javascripts/contact.js"></script>

Using the plugin

To use the plugin in any of your pages, simply use the contact liquid tag, then supply your pooleapp API key as the first argument and the redirect URL for when the form is submitted as the second:

{% contact YOUR_POOLE_APP_API_KEY http://YOURSITE.COM/PAGE?form=ok#alert-box %}

Demo

You can try out the demo on the about me page of this blog.

Setting Up SSL on Apache

| Comments

In this blog post I’ll walk you through setting up SSL on Apache. When talking about SSL the popular choice is OpenSSL, an open source toolkit for implementing Secure Sockets Layer (SSL) and Transport Layer Security (TLS). So we will be using OpenSSL for this tutorial.

Install OpenSSL

The first thing you need to do is determine the latest version of OpenSSL from the sources page; it's usually the one highlighted in red. Once you find it, copy its address and use wget to download it to your preferred directory:

wget http://www.openssl.org/source/openssl-1.0.1h.tar.gz

Next create the directory where you want to install openssl:

mkdir /usr/local/openssl

Extract the archive:

tar -xvzf openssl-1.0.1h.tar.gz

Then cd into it:

cd openssl-1.0.1h

Next, execute the config command to set the installation path for OpenSSL and check for any errors. The prefix should be the same as the directory you created earlier:

./config --prefix=/usr/local/openssl --openssldir=/usr/local/openssl

Next, execute make to compile the source code. If this doesn't work for you, try adding sudo before the actual command. After make is done and there aren't any errors, you can execute make install to install the files into their appropriate directories.

Once that’s done you can verify that openssl is successfully installed by executing the following command:

/usr/local/openssl/bin/openssl version

Generate Keys

Once you're done installing OpenSSL, you can assign its path to a variable:

export OpenSSL_HOME=/usr/local/openssl

And then add it to your system path:

export PATH=$PATH:$OpenSSL_HOME/bin

Next create a private key:

openssl genrsa 2048 > privatekey.pem

In the command above, genrsa 2048 tells OpenSSL to generate an RSA key that is 2048 bits long. RSA is a public-key algorithm used for encryption.

Next create a CSR (Certificate Signing Request) using the private key that we have just generated:

openssl req -new -key privatekey.pem -out csr.pem

The command above will ask for the following:

  • Country Name – the 2-letter abbreviation of your country name
  • State or Province – (e.g. California)
  • Locality Name – (e.g. Palm Desert)
  • Organization Name – the name of your company
  • Organization Unit – the name of your website
  • Common Name – the domain name of your website (e.g. mywebsite.com)
  • Email Address – your email address

The information above will be used for the certificate that will be issued to you later on, so be sure to supply the correct details.
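If you'd rather not answer the prompts interactively, the same information can be passed on the command line with the -subj flag. The subject values below are placeholders:

```shell
# Generate a key and CSR non-interactively; every subject value here is
# a placeholder that you should replace with your own information.
openssl genrsa 2048 > privatekey.pem
openssl req -new -key privatekey.pem -out csr.pem \
  -subj "/C=US/ST=California/L=Palm Desert/O=My Company/OU=My Website/CN=mywebsite.com/emailAddress=admin@mywebsite.com"

# Verify what ended up in the CSR
openssl req -in csr.pem -noout -subject
```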

Enable SSL on Apache

Now that we have generated all the keys we need, we can configure Apache to use them. First, enable the SSL module by executing the following command:

sudo a2enmod ssl

Then restart apache for changes to take effect:

sudo service apache2 restart

Next edit the ssl configuration file for apache:

sudo nano /etc/apache2/sites-available/default-ssl.conf

Comment out the following lines by adding a pound (#) sign before them:

SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key

Next look for the following line:

<VirtualHost _default_:443>

And then under it set the server information:

ServerAdmin admin@mywebsite.com
ServerName mywebsite.com
ServerAlias www.mywebsite.com
DocumentRoot /home/www

Next look for SSLEngine On and then under it add the following:

SSLCertificateFile /home/wern/signed-certificate.crt 
SSLCertificateKeyFile /home/wern/privatekey.pem 

SSLCertificateFile is where you specify the path to your website's digital certificate. I didn't cover the step of acquiring one because there are a lot of certificate authorities out there. So far I've only tried Namecheap, and it's pretty easy to acquire a certificate from them. Just create an account and log in. Once you're logged in, click on the security menu and select SSL certificates. From there, click the button under domain validation, add your preferred certificate to the cart, and go through the steps.

Once you have purchased a certificate, hover over your user name on the upper left side of the screen and select manage ssl certificates. That will bring you to the page where all your certificates are listed. By default a new certificate just sits there waiting to be configured. Configure it, select Apache + OpenSSL when it asks for your server configuration, and when it asks for the CSR, copy the contents of the csr.pem file that we generated earlier and paste it into the textarea. After that, click submit and go through the steps provided by Namecheap. Once everything is ok, Namecheap will send you the certificate via email. Copy it and save it on your server; the path to that file is what you need to assign to SSLCertificateFile in Apache.

Next is SSLCertificateKeyFile: that's the path to your private key, which in our case is the privatekey.pem file.

Once that's done, you just have to enable the SSL site:

sudo a2ensite default-ssl.conf

And then restart apache so that the changes will take effect:

sudo service apache2 restart

That's it! Enjoy your new HTTPS-enabled website. The next step would be to redirect all HTTP requests to HTTPS, but I'll leave that one to you.
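For reference, a redirect like that can be sketched in the plain-HTTP virtual host. This is a common pattern rather than part of the setup above, so adjust the domain to your own:

```apache
<VirtualHost *:80>
    ServerName mywebsite.com
    ServerAlias www.mywebsite.com
    # Send every plain-HTTP request to the HTTPS version of the site
    Redirect permanent / https://mywebsite.com/
</VirtualHost>
```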

How I Work With Clients

| Comments

In this post I’m going to share some of the things I usually do when working with my clients.

Getting Projects

First off, I'm not actively looking for work since I already have a full-time job, so I usually let potential clients contact me about projects. My contact details are on the about me page, and they can contact me via my primary email or Skype. I have Twitter, but I usually don't entertain people who contact me there. There's also LinkedIn, but most of my contacts there are recruiters, which is no good because they usually come at you with full-time jobs at a physical office somewhere.

Now that you know how I get client work, it's time to proceed with the how. The first thing that happens is that I receive an email or a Skype contact request with some project details in it. Something like:

Hey I read your blog post on {Some blog post I’ve written before} and I think you would be able to do this project. {An overview of the project}. Is this something you’re interested in doing for us?

Depending on my current workload and how interesting the project is, I either decline or accept. If I still have a bit of time and the project is interesting, I usually say yes. If it's not something interesting, I say no, even if I have a lot of free time after work. I don't really like doing something I don't enjoy just for the sake of some cash.

Once I've decided to accept the project, I send an email saying so. Here's a template that I usually go with:

Hi {first name of client},

Yes I’m interested in this project. However I currently have a full time job thus I won’t be able to work on this project full time. I can only do this after I’m done with my work or on some free time on weekends. If you’re ok with this then I’ll happily accept this project.

Regards,
Wern

As you can see above, I always try to make my current occupation clear, whether that's a job or another project that I'm working on. The project only begins if the potential client is ok with it.

Introductory Email

At the beginning of the project I usually send an email to introduce myself and the guidelines and process that I follow when working on a project. Something like this:

Hi {first name of client},

Thank you for understanding the situation. I can begin doing the project starting tomorrow. But first here are some guidelines that I follow when working on a project:

- First. All things that have something to do with the project should be added on Trello, a web-based project management tool. I've already invited you to it; please accept my invitation so you can familiarize yourself with it. If you have any questions, suggestions or clarifications regarding the project, please add them on Trello.
- Second. If you need to talk to me you can contact me on Skype but first send an email that you want to talk to me and I’ll try to look for a good time to talk. Here is my skype user name: wernancheta
- Third. I may not always be available so please understand that I can’t always immediately reply to an email or a question on Trello.
- Fourth. I usually put a number of features into a group. Once a specific group is satisfactorily completed I ask for a payment.
- Fifth. Estimates cover up to 3 small revisions for each feature. A small revision doesn't take more than 10 minutes to do. Anything that takes longer than that, I'll have to charge an additional fee for.

Regards,
Wern

This usually goes smoothly and the client says ok.

Trello Workflow

Next is the Trello workflow. What I do is stick with the following lists:

  • To do – items that my client and I have talked about.
  • Won't do – items that we have decided not to do. The usual reasons are that the client no longer wants the feature or that it has been postponed until later.
  • Doing – items from the To do list that I'm currently working on.
  • Done – items that I believe are already done. I usually manually test items before I move them to this list. When there are issues with an item, the client can comment the issue on that specific card. Once I've confirmed that it's a real issue that needs to be worked on, I move the item back into the Doing list.
  • Proposals – features that I consider necessary but which the client didn't mention. Items from here get moved to the To do list once I get the client's approval.
  • Other Info – anything else about the project that doesn't belong to any of the above. Initially this is where I put a quick tutorial on how to use Trello.

On each list I put a README card to explain to the client what the list is for.

Trello is great for clients who love asking for project progress every second. By looking at Trello they already have an idea of what still needs to be done, what I'm currently working on, and what else I have to do.

Development

When developing, I usually push the files to OpenShift because they offer free hosting for up to 3 projects. Databases are also covered, so it's really sweet considering that it's free. By using OpenShift I can also ensure that my clients can't just run away with the source code and call it a day. If I've already established a certain amount of trust with a client and they have a server where I can put the source code, then I use their server instead.

Payments

Lastly, there are the payments. I don't receive payments up front; this is how I establish trust with the client. So unless the client is some kind of heartless villain who enjoys not paying for someone's service, I can usually expect them to pay. What I do is group the features that I'll be working on into 2, 3 or 4 groups depending on the number of features. I usually arrive at 4 groups, which means I'll be asking the client for payment 4 times. Once the first group is satisfactorily done without issues, I email my client. I go with the following template:

Hi {first name of client},

Here’s the break down for the {name of group}:

{List of features here}

Total: {total price}

You can pay in this email with paypal: {my paypal email address}

Regards,
Wern

That's it! You might have noticed that I didn't mention anything about contracts. That's because I don't do contracts. I believe contracts just give you the power to sue someone and go to court. Because I usually work remotely, I don't think I could go to court if my client is on the other side of the world. So if they don't pay, I'll just pray for their souls.

What I’ve Been Up to Lately

| Comments

You might have noticed that I no longer publish new blog posts as frequently as I have before. That is because I’ve been busy with other stuff lately. It all started when I joined Islick Media last March. My job at Islick Media is pretty much the same as a regular job where you work 8 hours a day, 5 days a week. Nothing out of the ordinary.

Then I got an unexpected project from someone who had read my blog post on the Amazon Product Advertising API. I was hesitant at first because I'm already happy with my job and my salary. After some pondering I thought that extra income would be nice, so I gave it a shot and emailed the person back, making it clear that I currently have a full-time job and would only be able to do the project in my free time. The person replied saying that it's ok, and the rest is history. I got the project last April, and it's still ongoing, so most of my free time goes into it.

Going back to February, I also tried emailing Sitepoint, a company dedicated to making awesome articles on web development. It was pretty much a cold email saying that I wanted to write for them; that I'd been writing articles about web development for a while, but only on my blog, and that I wanted to try to make money doing it. I waited but didn't get a reply after a week, so I thought they weren't interested. But then, after exactly a month, the managing editor of the Sitepoint PHP channel emailed me back with an apology for not getting back to me sooner. The important part is that I got an ok. And man, that was the most awesome feeling ever! Sitepoint is one of the most popular websites publishing resources (books, articles, courses) on web development. The fact that I get to write for them is really just awesome.

Lastly, I'm also occupied with a personal project that I hope will turn into a nice source of passive income. I can't say anything about the project yet, but once I get it out there I'll publish a blog post about it, so stay tuned.

And that's pretty much what I've been up to lately. I don't think I'll be able to write anything lengthy on the blog soon, but I'll try publishing some short tutorials so I still have fresh content even while I'm busy. Basically, the Whirlwind Tour of Web Developer Tools series isn't going to continue soon. I'd like to provide as much information as I can in each part of the series, but I don't have time to write lengthy posts, so I'm going to temporarily stop the series.

That's it for this blog post. At times like this I really wish the Hyperbolic Time Chamber were real, so I wouldn't need to prioritize things and could just do everything I want to do.

Things I Learned on My Third Job

| Comments

It's been 3 months since I joined Islick Media, a web development shop based in Palm Desert, California. Just like my previous jobs, I work for them remotely. In this blog post I'll share some of the things I've learned on the job.

Synxis

Synxis is a reservation system, and it's a pain in the neck to work with. Any code that has something to do with their reservation features is not accessible; at most you can only update the HTML for the header and footer of the page. Uploading new files is also painful, as you either have to install Java so you can run their image uploader, or suck it up and upload files one by one.

Wordpress Theme Customization API

I worked with the Wordpress Theme Customization API on my first project at the company. I used it to give users of the Wordpress theme that I created a simple way of customizing its look and feel. Things like customizable link colors, header and background images can go a long way in making your Wordpress theme easy for non-programmers to customize.

Zillow

Zillow is a home and real estate marketplace dedicated to helping homeowners, home buyers, sellers, renters, real estate agents, mortgage professionals, landlords and property managers find and share vital information about homes, real estate, mortgages and home improvement. I used their API to provide zestimates (Zillow estimates) for real properties.

Laravel

This is not the first time I've encountered Laravel. It's some sort of reacquaintance, since I first used it in 2012 when it was newly released. Fast-forward to 2014 and a bunch of stuff has changed and improved. Some of my previous knowledge was still of use, but I also had to learn new things and new ways of doing them. I learned about the IoC container and how to make use of external classes the Laravel way. I also learned about the authentication class, which makes writing the login functionality for your app a breeze.

Mailing Services

Mandrill and Mailgun are mailing services that I've used for sending out emails for my projects. Yes, you can pretty much use the built-in mail server on the server where your app is hosted, but the main advantage of using a mailing service is authentication. With services such as Mandrill or Mailgun you get the benefit of having your email come from a reputable server. This leads to a higher rate of emails actually making it into your customers' inboxes instead of their spam folders.

SPF and DKIM

SPF and DKIM are ways to authorize mailing services such as Mandrill and Mailgun to send on behalf of your server, so you can get a cool-looking address like awesomeness@coolness.com to work and actually make it to your customers' inboxes.
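In practice, both boil down to DNS TXT records on your domain. The sketch below shows their general shape; the exact include hosts, selector, and key come from your mailing service's dashboard, so treat these values as placeholders:

```text
; SPF: authorize the mailing service to send for your domain
coolness.com.                      TXT  "v=spf1 include:spf.mandrillapp.com ~all"

; DKIM: publish the public key under the selector your provider gives you
mandrill._domainkey.coolness.com.  TXT  "v=DKIM1; k=rsa; p=YOUR_PUBLIC_KEY_HERE"
```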

Amazon EC2

Short for Amazon Elastic Compute Cloud, it's basically a cloud computing platform. You can use it to host web applications, and scaling is already taken care of. You can start with a fairly low-performance, low-storage server instance, and once you reach the point where the current instance is no longer performing well, you can just upgrade it and all your stuff will still be there.

Stripe

Stripe is a company that provides a set of APIs for enabling businesses to accept and manage online payments. They have SDKs (Software Development Kits) available for different programming languages, which is nice since no matter what language you're using to write your app, you can use an SDK to easily talk with the Stripe API. Stripe uses credit cards for payments; one-time payments and subscription-based payments are automatically handled for you.

Twilio

Twilio is a cloud communications company. They allow developers to add SMS and voice functionality to websites. I used Twilio in my second project (Vmonial) with the company; Vmonial is an app that allows businesses to accept voice testimonials from their clients. I used the Twilio Voice API on the project. You basically control the call flow using XML files (TwiML) with tags like <Say>, <Record>, <Play>, <Gather> and <Response>.
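For a taste of what that looks like, here's a small TwiML sketch of a voice-testimonial style flow. The wording, recording length, and callback path are made up for the example, but the tags are standard TwiML:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Response>
  <!-- Read a prompt to the caller, then record what they say -->
  <Say>Please leave your testimonial after the beep.</Say>
  <Record maxLength="60" action="/handle-recording" />
  <Say>Thank you. Goodbye!</Say>
</Response>
```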

WHM

WHM is sort of the big mama of cPanel. It's where you can manage cPanel instances, users, third-party extensions, and lots of other stuff for administering a server.

Elastic Search

Elasticsearch is an open source, distributed and RESTful search engine. It's like Apache Solr, which I've written about a few times on this blog. Lots of people say really good things about Elasticsearch, which is why I gave it a try on my third project (Roof99) to handle the search. MySQL was not a choice since it's a relational database and would be terribly slow for searching. Elasticsearch, on the other hand, is a search index: documents are stored in JSON format and querying is done through REST (Representational State Transfer) calls.
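For example, a full-text search against such an index boils down to POSTing a small JSON query to an endpoint like /listings/_search. The index and field names below are invented for illustration:

```json
{
  "query": {
    "match": {
      "description": "two bedroom apartment"
    }
  },
  "size": 10
}
```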

Prediction IO

PredictionIO is an open-source machine learning server. You can use it to create personalized applications. With PredictionIO you can predict your users' behavior, offer personalized content (e.g. news, ads, jobs), and help them discover things they might like. All of this is done by having the server silently record each user's activity within your app, such as viewing, liking, disliking, and rating something.
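Here's a sketch of the kind of event payload you'd send to a PredictionIO event server to record that activity. The user and item IDs are made up for illustration:

```javascript
// Sketch: an event recording that a user rated an item. This plain
// object would be POSTed as JSON to the PredictionIO event server.
function rateEvent(userId, itemId, rating) {
  return {
    event: 'rate',
    entityType: 'user',
    entityId: userId,
    targetEntityType: 'item',
    targetEntityId: itemId,
    properties: { rating: rating }
  };
}

var ev = rateEvent('u1', 'news-42', 4);
console.log(ev.event, ev.properties.rating); // rate 4
```

Once enough of these events accumulate, the server can train on them and start serving recommendations back to your app.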

Phonegap / Cordova

PhoneGap allows developers to create mobile apps using web technologies (HTML, CSS, JavaScript). Installing all the stuff needed to compile those HTML, CSS, and JavaScript files is a real pain, and sometimes you get an error that takes hours to solve. Thankfully there's the PhoneGap Build service by Adobe, which lets you upload your source files and shortly afterwards download the app installers for the platforms you support. This is pretty neat since all you have to do is write HTML, CSS, and JavaScript code like you always do, upload it to PhoneGap Build, and boom! You now have an installer for every mobile platform you support. A QR code is also generated every time you update the source code of your app, so you can just use your phone or tablet's QR code reader and it will download the installer directly, provided you're connected to the internet.

There's also hydration, which lets you easily update already-installed apps. If you upload a new version of your app to PhoneGap Build and then open the app on a mobile device, hydration will detect the update and ask whether you want to apply it. So there's no more need to reinstall the app every time a new version is uploaded. Lastly, there are also debugging tools that let you debug the running instance of the app on your mobile device from the browser.

This is all really sweet and awesome, but we still need to think about performance, app permissions, and writing the code in a way that's easy to maintain. There's also a mobile development mindset you have to get into. What I'm saying is that you shouldn't write PhoneGap apps the way you write web applications, because the environment is different. In a browser, clicking a link loads a new page; in an app it opens the browser and navigates to that link instead. So basically, most of the things you need from the server side have to be done using AJAX requests, and updating the UI can be done using templates and so on.
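The AJAX-plus-templates approach can be sketched like this. The `render` function here is a toy stand-in for a real template engine, and the endpoint in the comment is hypothetical:

```javascript
// Sketch: render server data into a template client-side instead of
// loading a new page. render() replaces {{name}} placeholders with
// values from a data object.
function render(template, data) {
  return template.replace(/\{\{(\w+)\}\}/g, function (match, key) {
    return data[key] !== undefined ? data[key] : match;
  });
}

var html = render('<h1>{{title}}</h1><p>{{body}}</p>',
                  { title: 'Hello', body: 'From the server' });
console.log(html); // <h1>Hello</h1><p>From the server</p>

// In the app, the data object would come from an AJAX call, e.g.:
// $.getJSON('https://example.com/api/posts/1', function (data) {
//   document.getElementById('post').innerHTML = render(template, data);
// });
```

The app ships with the templates baked in, so only the JSON data travels over the network.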

That's it for now. In the coming months I'll be updating this post to share more of the things I've learned at my current job.

A Whirlwind Tour of Web Developer Tools: Build Tools

| Comments

In part seven of this series I'm going to walk you through build tools. As usual I'm going to summon a Wikipedia page to do the defining for me because I really suck at defining things:

Build automation is the act of scripting or automating a wide variety of tasks that software developers do in their day-to-day activities including things like compiling computer source code into binary code, packaging binary code, running tests, deployment to production systems, creating documentation and/or release notes

In other words, build tools make developers' lives easier by automating mundane tasks. In the web development world we commonly use build tools to lint, test, minify, and deploy source code.
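To make the idea concrete, here's a toy illustration of one of those mundane tasks: minifying JavaScript. Real minifiers (UglifyJS and friends) actually parse the code; this naive version just strips line comments and blank lines, which is enough to show what a build tool would run for you on every change:

```javascript
// Toy "minifier": strip // comments, trim whitespace, drop blank
// lines, and join everything onto one line. A build tool would run
// a task like this automatically whenever the source changes.
function naiveMinify(source) {
  return source
    .split('\n')
    .map(function (line) { return line.replace(/\/\/.*$/, '').trim(); })
    .filter(function (line) { return line.length > 0; })
    .join(' ');
}

var out = naiveMinify('var a = 1; // count\n  var b = 2;\n');
console.log(out); // var a = 1; var b = 2;
```

The point isn't the minification itself but that you never run it by hand: the build tool watches your files and does it for you.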

A Whirlwind Tour of Web Developer Tools: Source Control

| Comments

This is part seven of the series A Whirlwind Tour of Web Developer Tools. In this part I'm going to walk you through source control, also known as version control or revision control. Whichever term you've heard before, they all mean the same thing. As I did with the previous parts of this series, I bring you the definition of source control from Wikipedia, since they really do a great job at defining things:

Revision control, also known as version control and source control (and an aspect of software configuration management), is the management of changes to documents, computer programs, large web sites, and other collections of information. Changes are usually identified by a number or letter code, termed the “revision number”, “revision level”, or simply “revision”. For example, an initial set of files is “revision 1”. When the first change is made, the resulting set is “revision 2”, and so on. Each revision is associated with a timestamp and the person making the change. Revisions can be compared, restored, and with some types of files, merged.

In simple terms, version control is a way to manage changes to a set of documents. In the context of web development, the documents we need to manage are the source files of the websites or web applications we're building: HTML files, stylesheets, script files, images, and other assets.

A Whirlwind Tour of Web Developer Tools: Package Managers

| Comments

In this part of the series I'll walk you through package managers. I believe the definition available on Wikipedia gives a good overview of what package managers are:

In software, a package management system, also called package manager, is a collection of software tools to automate the process of installing, upgrading, configuring, and removing software packages for a computer’s operating system in a consistent manner. It typically maintains a database of software dependencies and version information to prevent software mismatches and missing prerequisites.

In simple terms, package managers make it easy to install and modify software. In this blog post we'll walk through some of the package managers available for Linux, Mac, and Windows, as well as package managers for easily installing front-end dependencies like jQuery or Twitter Bootstrap.