Wern Ancheta

Adventures in Web Development.

What Playing Clash of Clans Can Teach You About Life


  1. You need to prioritize. There are only two or three builder huts when you start playing the game, which is why it’s important to prioritize which things you upgrade. Upgrading takes time, and cancelling an upgrade only returns half of the resources you spent on it. This usually means there’s no turning back after you’ve clicked that upgrade button. Just like in life, you have to prioritize. You can’t just aimlessly do everything that seems interesting.

  2. You need the right strategy for every raid that you do. Not every base is the same. There are bases whose traps are laid outside the walls, which means they will be triggered the moment your troops take their first steps toward the loot. There are traps laid right before key defenses such as the mortar or air defense. Just like in life, you need to carefully plan out your every move, especially the most important ones such as career decisions.

  3. You can’t protect everything. Put your town hall outside in order to protect what’s really important. Some players might disagree with this and say that a well-designed base is the ultimate defense that keeps other players from attacking your village. Some players might also prioritize trophies instead of loot, which is why they keep their town halls at the very center of their village. Those are all valid arguments. But come to think of it, over 6 million people have the game installed on Android alone. I don’t know about iOS, but 6 million people is a lot. This means that no matter how strong you think your village is, there’s always someone who can 3-star it effortlessly and take away a ton of loot. So it’s important to realize that you can’t protect everything. You have to prioritize what you really want to protect and design your village in such a way that it’s protected in the best possible way. Just like in life, you can’t get everything that you want. It’s important to love what you currently have.

  4. There’s always someone out there who’s stronger than you. There’s no need to feel bad whenever someone else 3-stars your village. Just like in life, you don’t need to feel bad about yourself whenever you see someone who is effortlessly better than you at the thing you’re good at. What’s important is that you work hard to get better each day.

  5. Revenge is for the weak. I don’t know about other players, but I pretty much don’t care about other players attacking my village. It doesn’t really matter whether they 3-starred me or took a whole bunch of loot. They’re just playing the game, like me. It’s natural for your village to get attacked and for other players to take some resources out of it, just like it’s natural for you to take from others as well. But maybe it’s just me. I heard you get a lot more trophies if you take revenge on someone, but trophies aren’t really important to me. I pretty much gave up on trying to go higher when I reached the Crystal league. I tried going higher in order to get the gems as the reward, but it’s just hard to find town halls that are unprotected.

  6. Time is money, but you can also use money to buy time. When I gave up on reaching the Master league in order to get the gem reward (and eventually buy the 5th builder with it), I just bought some gems with real money instead. I don’t really regret it, since I saved myself a lot of time trying to find unprotected town halls for a measly amount of trophies. This is an example of buying time with money. With the 5th builder, upgrades go faster, and I no longer need to aim for the gem reward for entering the Master, Champion, Titan or Legend leagues.

  7. Patience. Upgrades take time. Creating troops and spells takes time. Gold mines, elixir collectors, heck, even dark elixir drills take time. Heroes take time to sleep. The cool-down for requesting clan castle troops takes time. Searching for a good village to raid takes time. Patience is really a virtue, especially when it comes to playing Clash of Clans, where every move you make takes time.

  8. Collaboration. It’s important to collaborate with your clan mates during a war. You won’t always have a mirror that directly matches yours. Oftentimes your mirror is far stronger than you are; sometimes the town hall doesn’t even match. That’s why it’s important to collaborate with your clan mates so that you can adjust accordingly. Just like in life, it’s important to collaborate with your fellow employees, with your community and with your family.

  9. Progress will come naturally if you just stick to it. When I first started, I really envied my friends on Facebook because their villages looked so strong and tough while mine looked really weak. But six months later, my village was pretty much on par with theirs or even stronger. I’m not addicted to the game though; I really only play in my free time. But I stick to it every day, and that is why I can see that I’ve come a long way since I first started. This is similar to anything that you want to achieve in life. As a developer, I always see to it that I learn something new each day, or gain a better understanding of what I previously learned. Just stick to doing something each day and progress will come naturally.

  10. Even heroes need sleep. So do you. Sleep is needed in order to recharge your mind and body. The brain needs sleep in order to consolidate the things you’ve learned during the day. So even if you think you’re a superhero who only needs 3 hours of sleep to get by, you’re not. You can’t really hack your way out of sleep. That’s what makes us human. We need rest in order to recharge.

  11. Donate and you will be rewarded. In Clash of Clans there is a donation system wherein a clan member can ask for troops to be put in their clan castle. Those troops serve as guards for your village. If you always donate troops whenever someone asks, you will be remembered, and your clan mates will love to donate troops to you as well. The same is true in life. Always give to the less fortunate and you will be rewarded.

Introduction to Naughtyfire


Welcome to yet another introduction to a side-project of mine. This time I’d like to talk about Naughtyfire.

Naughtyfire allows remote (home-based) developers to automatically notify their boss that they are taking a day off on a specific date. Settings, clients and events can be set in the web interface. All you need to do is put it in the www or public_html directory in your home folder.

Notifications are sent via email through a mail provider of your choice. I’ve only tested the Mandrill configuration, but it should work with other mail service providers supported by SwiftMailer.

This project is meant to be used locally on your own computer, but you can also upload it to a server and set it up from there.

Dependencies

  • Apache
  • MySQL
  • Composer
  • SwiftMailer
  • Phinx

How to Use

  • Put it in your www or public_html directory
  • If you have installed it in a folder under your www or public_html directory, you have to set up a virtual host for that path and then use it when accessing the app from the browser. Accessing it via http://localhost/naughtyfire won’t work since the assets are linked using an absolute path. I recommend setting the host name to naughtyfire.dev.
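For reference, here’s a rough sketch of what such a virtual host could look like. The DocumentRoot path and the config file location are assumptions; adjust them to wherever you actually placed naughtyfire.

```apache
# e.g. in httpd-vhosts.conf (location varies per Apache install)
<VirtualHost *:80>
    ServerName naughtyfire.dev
    # Assumed install path; point this at your naughtyfire directory
    DocumentRoot "/Users/you/www/naughtyfire"
    <Directory "/Users/you/www/naughtyfire">
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>
```

You would also map the host name in /etc/hosts with a line like 127.0.0.1 naughtyfire.dev so the browser can resolve it.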
  • Navigate to the naughtyfire root directory then install the libraries by executing composer install in your terminal.
  • Create the database that naughtyfire will use.
  • Create a phinx.yml file and update the database credentials. You only have to update the values under the development configuration, specifically name, user and pass.
paths:
    migrations: %%PHINX_CONFIG_DIR%%/migrations

environments:
    default_migration_table: phinxlog
    default_database: development
    production:
        adapter: mysql
        host: localhost
        name: production_db
        user: root
        pass: ''
        port: 3306
        charset: utf8

    development:
        adapter: mysql
        host: localhost
        name: naughtyfire
        user: root
        pass: ''
        port: 3306
        charset: utf8

    testing:
        adapter: mysql
        host: localhost
        name: testing_db
        user: root
        pass: ''
        port: 3306
        charset: utf8
  • Access the host that you have selected (e.g. naughtyfire.dev) from your browser. The default page is the one for creating new events, but you can also access the following pages:

    • /settings – for updating the settings. Here you can set the Twilio credentials and mail settings. Notifications won’t work if these are left blank, so choose either one of those (or both) and supply the correct values.
    • /recepients – for listing current recipients.
    • /recepients/new – for creating a new recipient.
  • Open the crontab.

crontab -e

Use wget to request the notifier URL once a day. The 0 0 * * * schedule below runs it at midnight every day.

0 0 * * * wget -O - http://naughtyfire.dev/notify >/dev/null 2>&1

Here are a couple of screenshots of the app.

new event

mail sample

You can find more details about this project on its project page.

If you have an idea or you want to contribute to this project, feel free to check it out, fork it, or create an issue in its GitHub repo.

My OSX Development Environment


I’ve had my fair share of Windows and Linux development. This time I’d like to share with you the development tools I use on my OS X machine.

Homebrew

A must-have for every Mac developer, the missing package manager for OS X. If you came from Ubuntu, this is basically the same as apt-get. You can install it using Ruby, which already comes pre-installed on the Mac, so you can execute the following command directly.

ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Once that’s done you can install every developer tool you can think of by using brew install. Packages in Homebrew are called formulas; you can find them on homebrewformulas.org or in the Homebrew repository on GitHub.

MAMP

MAMP is the equivalent of the LAMP stack on the Mac; it stands for Mac, Apache, MySQL, PHP. You can install it by downloading the installer from the MAMP downloads page. These days I only really use the M (MySQL) of MAMP, since I usually develop PHP applications using the Laravel framework, which can already serve your project while you’re developing. That basically takes care of the A and P parts of MAMP.

Sublime Text

I’m not a big fan of IDEs (Integrated Development Environments), so I use a plain old text editor when writing code. My text editor of choice is still Sublime Text, mainly due to its simplicity and good performance. It just allows me to code without having to think too much about how I can use the different bells and whistles. You can download the installer from the downloads page.

Node

Node has become quite the dependency for everything. Development tools such as Gulp and Sass require Node.js to be installed. Well, not exactly Node.js, but npm. But why would you want to install Node without npm? They’re like inseparable twin brothers. So better to install Node, which already comes with npm, since sooner or later you’ll need Node anyway. You can install Node using the Node Version Manager (NVM). You can download it using curl and pipe it to the shell:

curl https://raw.github.com/creationix/nvm/master/install.sh | sh

Once that’s done, you can list the node versions that are currently available.

nvm ls-remote

From there you can install the version that you want.

nvm install v0.12.7

And then finally set it as default.

nvm alias default v0.12.7
nvm use default

Git

Git is my version control system of choice. You can install it via Homebrew.

brew install git

Don’t forget to set your global config after installing:

git config --global user.name "Wern Ancheta"
git config --global user.email "myemail@gmail.com"

You can also set the credential helper to use osxkeychain. This allows you to cache your password so that you don’t need to enter it every time you push to the server.

git config --global credential.helper osxkeychain

Xcode

I needed Xcode to compile and build the Cordova app that I was developing. Xcode comes with an iOS simulator which is great for testing Cordova apps to be deployed on an iOS device. You can install it using the following command:

xcode-select --install

If you don’t want to deal with the command line, you can simply get it from the Apple Developer downloads page and look for the most recent version of Xcode. Be warned that Xcode is 2.6GB in size, so it might take a while to download depending on your connection speed.

Cordova

Cordova is a set of device APIs that allows mobile app developers to access native device functions such as the camera through JavaScript. Cordova basically does all the heavy lifting when it comes to developing hybrid mobile apps. You can install it using npm:

npm install -g cordova

Ionic

Ionic is my Hybrid Mobile App Framework of choice. You can also install it with npm.

npm install -g ionic

Introduction to Staticizer


Welcome to yet another promotional post on another side-project of mine. This time it’s Staticizer, a static site generator.

Yet Another Static Site Generator?

Not exactly. This isn’t like most static site generators out there. I created this project to generate a static version of my antares project, so you can use it to create a static version of an existing project. All it really does is request a URL on your local machine and create an HTML file out of it. This works best for websites that have only a few pages.
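To make that concrete, here’s a rough shell sketch of the core idea. Staticizer itself does this in PHP; the base URL and page names below are hypothetical examples, not part of the project.

```shell
# Fetch each page of the local site and save it as a static HTML file.
BASE_URL="http://antaresapp.dev"   # hypothetical local site
STATIC_PATH="site"                 # where the generated files go
mkdir -p "$STATIC_PATH"
for page in "" "about" "contact"; do
  # An empty page name maps to index.html, everything else to <page>.html
  curl -s "$BASE_URL/$page" -o "$STATIC_PATH/${page:-index}.html" || true
done
```

That loop is the whole trick: request a URL, write the response to disk, repeat for each page.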

How to Use

First you have to update the index.php file and write your own code for fetching the pages in the website you want to convert to a static one.

Next, update the config.php file and change the values for BASE_URL, STATIC_PATH and JSON_PATH. BASE_URL is the base URL of the website you want to convert. STATIC_PATH is the base directory where you want to save the generated HTML files. JSON_PATH is the path to the JSON files inside the STATIC_PATH that you specified; this is optional and only needed if you are serving JSON files on the original website.

<?php
define('BASE_URL', 'http://antaresapp.dev/');
define('STATIC_PATH', 'site');
define('JSON_PATH', 'json');
?>

Note that you have to manually copy front-end assets (CSS, scripts, images) into the STATIC_PATH.

Deployment

This works best with GitHub Pages. Just create a new GitHub account whose name matches the name of the website. As an example, I created a GitHub account named antaresapp, then created a new repository named antaresapp.github.io. This serves as the repository that the GitHub page will use. Remember that you can only create a single GitHub Pages site of this kind for every GitHub account.

In your static path, initialize a new Git repo and add the GitHub Pages repository as a remote.
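As a sketch, those two steps could look like this, assuming the static path is site and the example antaresapp account from above (both placeholders for your own names):

```shell
# Initialize a Git repository inside the static path...
mkdir -p site
git init site
# ...and add the GitHub Pages repository as a remote named "pages".
git -C site remote add pages https://github.com/antaresapp/antaresapp.github.io.git
git -C site remote -v
```

With the remote in place, push.php only has to commit the generated files and push them to that remote.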

Lastly, you can use this project by executing the following commands from your terminal.

To update the database:

php update.php

To generate the static HTML files and JSON files:

php generate.php

To push the changes to the Github repo:

php push.php

If you want to know more about the project, you can check it out here.

Implementing Video Calls With PeerJS


Picking up from where we left off last time, let’s now try to add video to our simple calling app with PeerJS. If you haven’t read my previous tutorial, go ahead and read it first, as this article won’t make much sense otherwise.

First, we still need the same scripts we used in the last tutorial.

<script src="//cdn.peerjs.com/0.3/peer.min.js"></script>
<script src="//cdnjs.cloudflare.com/ajax/libs/jquery/2.1.3/jquery.min.js"></script>
<script src="//www.WebRTC-Experiment.com/RecordRTC.js"></script>

But for our HTML, we need to replace the audio element with video. We also set the video to autoplay so that as soon as the stream becomes available, the video starts playing.

<button id="start-call">start call</button>
<video controls autoplay></video>

For our custom script, we still have the getParameterByName function.

function getParameterByName(name){
    name = name.replace(/[\[]/, "\\[").replace(/[\]]/, "\\]");
    var regex = new RegExp("[\\?&]" + name + "=([^&#]*)"),
        results = regex.exec(location.search);
    return results === null ? "" : decodeURIComponent(results[1].replace(/\+/g, " "));
}

As for the getAudio function that we previously used for getting audio input from the user’s device, we now replace it with getVideo:

function getVideo(successCallback, errorCallback){
    navigator.getUserMedia({audio: true, video: true}, successCallback, errorCallback);
}

When the call is received, we now call the getVideo function instead of getAudio.

function onReceiveCall(call){

    console.log('peer is calling...');
    console.log(call);

    getVideo(
        function(MediaStream){
            call.answer(MediaStream);
            console.log('answering call started...');
        },
        function(err){
            console.log('an error occurred while getting the video');
            console.log(err);
        }
    );

    call.on('stream', onReceiveStream);
}

Once a stream is received, we also need to replace the element that we’re selecting. So we now select the video element instead of audio.

function onReceiveStream(stream){
    var video = document.querySelector('video');
    video.src = window.URL.createObjectURL(stream);
    video.onloadedmetadata = function(){
        console.log('loaded');
    };

}

The code for getting the current user and the peer is also the same.

var from = getParameterByName('from');
var to = getParameterByName('to');

But for the creation of the peer, we now use the PeerServer Cloud service instead of our own since we already did that last time.

var peer = new Peer(from, {key: 'Your PeerJS API Key'});

Then we listen for the open event on the peer just so we can check if the peer has actually been created.

peer.on('open', function(id){
    console.log('My peer ID is: ' + id);
});

We also listen to the call event so we can receive incoming calls.

peer.on('call', onReceiveCall);

For the start call button click event, we use the getVideo function and proceed as usual.

$('#start-call').click(function(){

    console.log('starting call...');

    getVideo(
        function(MediaStream){

            console.log('now calling ' + to);
            var call = peer.call(to, MediaStream);
            call.on('stream', onReceiveStream);
        },
        function(err){
            console.log('an error occurred while getting the video');
            console.log(err);
        }
    );

});

Conclusion

That’s it! We have implemented video calling using PeerJS. Do note that this will consume more bandwidth than audio calls, so performance might be affected depending on the network.

Things I Learned While Writing for Sitepoint


It’s been more than a year since I started writing articles for Sitepoint. For those who don’t know, Sitepoint is a provider of awesome content for web professionals. Anything web-related you can think of, they have it: tutorials on HTML & CSS, JavaScript, PHP, Ruby, Mobile, Design & UX, WordPress and even for web entrepreneurs. And they’ve been doing it since the year 2000, I believe. Going back to the main topic of this article: there’s a lot that I’ve learned, especially about my writing skills. When I first started, I thought my grammar was already perfect, but I’ve never been so wrong. Here’s a list of things that I wish I knew when I first started:

  • When using PHP libraries, always install them via Composer whenever possible.
  • When installing a single PHP library, it should be done from the command line using composer require instead of adding the configuration to the composer.json file. Here’s an example installing the Guzzle HTTP library:
composer require guzzle/guzzle

If you’re using Packagist, you can easily install a package by using the command provided on each package’s page.

  • Sharing files that are used in articles (SQL files, project files) should be done with GitHub if it’s a whole project, or Gist if it’s just a single file. I made the mistake of uploading them to the public folder of my Dropbox before.

  • Use the shorthand echo when outputting something with PHP. So instead of using <?php echo 'hello world!'; ?>, it should be done with <?= 'hello world!' ?>.

  • Always use a framework when the examples get too big, so that readers can easily try out the demo.

  • When using a framework, exercise separation of concerns. All routes should be in the routes file, and the routes file shouldn’t contain anything else. I made the mistake of using a closure in the routes to respond to HTTP requests. This shouldn’t be. Best practices should always be used even if the code isn’t for a real project. So the routes should use a controller which returns the view or executes a specific function.

  • When making HTTP requests, such as when the article talks about a specific API, Guzzle or some other HTTP library should be used instead of cURL directly, since some readers might not have cURL installed.

  • In every article, the convenience of the readers should always be the priority. This means that it should be easy to read. If the article includes a sample project, it should be hosted on GitHub or Bitbucket. Some authors prefer having a single repository for each article, but I prefer having everything inside a single repository, because the projects or sample code that I host there aren’t really updated that much. I think there’s no point having each one in its own repository. My main purpose in hosting with GitHub is to give the readers a place to examine the code with syntax highlighting and see how each file relates to all the others, so that they can easily set up a demo which they can play with on their local machine.

  • Write your article as if the reader is a beginner. Don’t make assumptions about the skills of the reader. But this doesn’t mean you have to walk the reader through the installation of PHP, or talk about the basics, when you’re writing an article about a specific API that uses PHP to make HTTP requests. Every PHP developer would already know that; in the first place, the reader shouldn’t be reading your article while not knowing anything about PHP. There’s always a minimum amount of requisite knowledge. Another example is when telling a reader to install a specific library using Composer. Not all PHP developers know about Composer. I can’t point you to a statistic, but always assume that there’s someone out there who still installs libraries using PEAR or zip files. In those cases you don’t have to walk the reader through how to install Composer; simply pointing out the website, or linking to the page which shows how to install Composer, should suffice.

  • Always try to include a demo as a supplement to the article. This is not something I’ve personally done, because most of the articles I write are about PHP, which runs on the server. With client-side articles (HTML, CSS and JavaScript) this is easy, since there’s CodePen, JSFiddle, JS Bin, and many others which allow you to easily create a demo that gives the reader an idea of what the output would be like.

  • Always give some time to the title of the article and the introduction. These are really important; they are what readers see first when they come across your article on social media sites like Twitter. They’re the first selling point of the article, so it’s important that they’re catchy.

  • Include screenshots to supplement a specific instruction or to show the readers the output.

  • Don’t just paste big blocks of code and explain them in a really long paragraph. Break the block down into parts and explain each part. Then you can paste the big block of code so the reader sees how it all comes together. Oftentimes I do the reverse: I paste the big block of code first with a summary of what it does, and then break it down into multiple parts.

  • Always participate in the comments. It’s not just about writing the article and having it published. If a reader comments on your article or asks a question, you should try to answer the best way possible, even if you don’t know the answer. Even if it’s not a direct question, or it’s just an opinion from the reader, you should try to participate and include your own opinions as well. Honestly, this is a part that I need to improve on; I don’t always participate in the comments.

  • Common grammatical errors. The common ones for me were the use of were vs. we’re, its vs. it’s, everyday vs. every day, and where to place the comma or whether it’s even needed. I think I’ve improved when it comes to this, but it’s always nice to have a second pair of eyes looking at your work. For this I use the Hemingway editor. It grades the readability of an article, marks potential errors, and provides some really good tips about your writing.

  • Use a bullet list instead of saying ‘next’ or ‘and then’ all the time. If a bullet list doesn’t feel right, connect sentences with commas.

  • Proper casing. Use all-caps when referring to an acronym. One of those acronyms is ID. It should be ID instead of id.

  • Needless words should always be omitted. Common offenders include the words ‘always’, ‘just’, ‘basically’, and ‘simply’.

  • Be consistent with the use of ‘we’ or ‘you’ when referring to the reader. You will often see these two words in tutorials. Whether you start with ‘you’ to refer to the reader, or ‘we’ if you’re a merry person who wants to include yourself while telling something to the reader, it’s important that you stick with whatever you started using. I prefer to use ‘we’ in most cases, since ‘you’ sounds really lonely, whereas ‘we’ has the connotation that you went through the same process the reader is going through when you were writing the article.

  • Proofread your article 3 or more times to ensure that common grammatical errors are caught and the wording is easy on the eyes and comfortable to read. This means that the article should be readable without the reader having to exert much mental effort, or having to go back to a sentence they’ve just read because it didn’t make sense.

  • When referring to a specific library such as jQuery, always be mindful of how it’s written on the website of that specific library. For jQuery, the ‘j’ is a small letter and the ‘Q’ is a capital letter.

  • Always be mindful of the word count. If an article is meant to be a series, then each part should have a word count of no greater than 3,000 words.

  • Always strive to make the work of the editor easier so that they will be more motivated to review your work.

  • Recently, Sitepoint implemented peer reviews utilizing GitHub. How this works is that all the articles are stored in a GitHub repository. Every new article is a separate branch that’s going to be merged into the main branch. A pull request is created for each new article, which is then reviewed by the other authors. The other authors comment on your work or make changes on their end, and the original author can then make use of these comments to improve the original article. This kind of workflow has levelled up my Git skills. And through the help of the other authors, I’ve learned how to improve my articles by altering the wording, providing screenshots and using frameworks when presenting code. The next step that I’m looking into is to also review the work of other authors, as a means of giving back and learning how the other authors construct their articles as well.

That’s it! I won’t treat this section as the conclusion, as there will always be new things to learn. I’ll update this article in the future once I learn some more. Be sure to check out the resources below if you also want to level up your writing skills. And if you’re a web professional, you’re welcome to join Sitepoint. They’re always looking for new authors. It doesn’t matter if you’re new to the industry or experienced; as long as you have something to share, you’re welcome to write for Sitepoint. Oh, and articles are paid really well, so it’s worth the time investment.

Resources

Getting Started With Amazon CloudFront


When developing websites, it’s important to deliver front-end assets as fast as possible to the client. One tool that web developers use is a Content Delivery Network (CDN), which is basically a way of distributing front-end assets (scripts, stylesheets and images) on servers across the globe so that the files have to travel less distance. This works by having the nearest server deliver the file to the client. Nicholas Zakas has written a really good article on how content delivery networks work; you can check that out if you want to dive deeper. In this article we’re going to take a look at Amazon CloudFront, the content delivery network offered by Amazon Web Services.

Setting Up a New Distribution

Amazon CloudFront serves the files from your S3 bucket. The first thing you need to do is go to the Amazon Web Services console, select CloudFront from the list of services, select create distribution, then click on the ‘Get Started’ button under the Web section.

getting started

Once you’re redirected to the next page, you will be greeted by a form where you enter the details of your new distribution.

distribution details

Each distribution uses a specific S3 bucket, which you pick in the Origin Domain Name field. It will look something like app-name.s3.amazonaws.com. Once you have selected the Origin Domain Name, the Origin ID is automatically filled in. You can click on the help icon next to each field to get information on what it does. Knowing that, you can just leave the optional fields blank and stick with the default values. Once you’re done filling out the form, click on the ‘Create Distribution’ button. After creation, it will be listed as the top item in your list of distributions. Your new distribution won’t be immediately usable; you can see this from the status field in the table. Right after creation, its status is ‘In Progress’. I’m not really sure what goes on behind the scenes during this time, but I assume it’s distributing all the files stored in the S3 bucket you selected across different servers around the globe. Once your new distribution is ready, you can use the domain name assigned to it as the domain name when linking your files. Do note that files distributed using CloudFront have to be invalidated every time you make a change to them, so it’s not recommended to use CloudFront while you’re still developing your app, as you’d frequently have to invalidate files as you make changes to your code.

Invalidating Files

You will need to invalidate files when you make changes to a file in your S3 bucket; the changes won’t take effect in the distribution until you do. To do that, click the distribution in the list, click the Invalidations tab, click ‘Create Invalidation’ and enter the path of the file you want to invalidate. The path is relative to the root of your bucket: if your bucket is named bookr and your file is at /uploads/users/image/image-001.jpg, then use that as the path. Do note that invalidating a file can take a while, so use it sparingly.

Conclusion

That's it! In this tutorial, you have learned how to use Amazon's Cloudfront as a solution for your CDN needs. It's really easy to get set up if you're already using S3 to serve your front-end assets.

Best Anime of All Time

| Comments

I decided to give my blog a 3-week break so I could make time for the 200 other things that I want to do. But then I said "fuck it". It's not just programming stuff that I can publish here on this blog. It's my personal blog after all. I can always publish some other stuff that won't take much of my time to write. So this time I decided to disguise the list of the best anime of all time as an actual blog post. But of course, this is all just my opinion. We all have different tastes, so don't take my word for it. Try watching 2 or 3 episodes and see for yourself. Ok, here goes:

  • Psycho Pass
  • Code Geass
  • Samurai Champloo
  • Anohana
  • Guilty Crown
  • Xam’d: Lost Memories
  • Parasyte the Maxim
  • Durarara!!
  • Eden of the East
  • Darker than Black
  • Fullmetal Alchemist: Brotherhood
  • Steins;Gate
  • D.Gray-man
  • Hunter X Hunter
  • The Melancholy of Haruhi Suzumiya
  • K-on
  • Hajime no Ippo
  • Katanagatari
  • Tengen Toppa Gurren Lagann
  • Kill la Kill
  • Haikyuu!!
  • Kuroko no Basket
  • Gatchaman Crowds
  • Tsuritama
  • Death Note
  • Yu Yu Hakusho
  • Attack on Titan
  • Avatar: The Last Airbender
  • Avatar: The Legend of Korra
  • Mirai Nikki
  • Toradora!
  • Kaichou wa Maid-sama!
  • Medaka Box
  • Accel World
  • Deadman Wonderland
  • Magi
  • Shaman King
  • Baccano!
  • Sket Dance
  • Akame Ga Kill
  • Nanatsu no Taizai
  • Slam Dunk
  • Assassination Classroom
  • Oregairu
  • Shokugeki no Soma
  • Hitsugi no Chaika
  • One Week Friends
  • Kakumeiki Valvrave
  • Yowamushi Pedal
  • Hamatora
  • Zankyou no Terror
  • Bakuman
  • Usagi Drop
  • Hanasaku Iroha
  • Tiger & Bunny
  • A-Channel

That's all I can think of for now. I have a really bad memory, so even if I've watched a really, really good anime, it might not have made it onto this list.

Quick Tip: How to Add Custom Pages in Wordpress

| Comments

In this quick tip I'll be showing you the easiest and quickest way to create custom pages under a specific theme in Wordpress. When I say custom, I mean a page where you can put anything you want using HTML, CSS, JavaScript and PHP code. The page also has access to the various APIs that Wordpress provides.

To start, create a new file under your theme folder. In this case I’ll be creating a custom-page.php file under the wp-content/themes/twentyfifteen directory of my Wordpress installation. Then add the following code in the file:

<?php
/*
Template Name: My Awesome Custom Page
*/
?>
<h1>This is my awesome custom page</h1>

Yes, that's all there is to it. Note that the Template Name: part is very important; it's the comment Wordpress uses to recognize your file as a page template. You can assign any value you want as long as it's descriptive.
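To see why that comment matters, here's a rough sketch (plain PHP, hypothetical; Wordpress's real implementation is more general) of how a Template Name can be pulled out of a file's header comment:

```php
<?php
// Hypothetical sketch of how a "Template Name:" header can be extracted
// from a template file's contents. Wordpress does something similar
// internally when it scans the files in your theme folder.
function extract_template_name($file_contents)
{
    // Capture everything after "Template Name:" up to the end of the line.
    if (preg_match('/Template Name:\s*(.+)/', $file_contents, $matches)) {
        return trim($matches[1]);
    }
    return null; // No header comment: not a page template.
}

$contents = "<?php\n/*\nTemplate Name: My Awesome Custom Page\n*/\n";
echo extract_template_name($contents); // My Awesome Custom Page
```

This is why the file can otherwise contain anything you like: only that one comment is needed for it to show up as a template.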

To assign this template to a specific Wordpress page, add a new page from the Wordpress admin and select the template we just created under the Template drop-down:

custom wordpress page

Now when you access the page from your browser, you will get that awesome heading. From your custom page you can also use the methods available in the various Wordpress APIs, as well as built-in theme functions such as get_header and get_footer.

Getting Started With Amazon S3

| Comments

Amazon S3 is Amazon's file storage service. It allows users to upload files to Amazon's servers, for later access or for sharing with other people. In this tutorial I'm going to walk you through how to use Amazon S3 within your PHP applications.

First thing that you need to do is create a composer.json file and add the following:

{
    "require": {
        "aws/aws-sdk-php": "2.7.*@dev"
    }
}

Next execute composer install from your terminal to install the Amazon Web Service SDK.

Once the installation is done you can now create a tester.php file which we will use for interacting with the Amazon AWS API. Add the following code to the file:

<?php
require 'vendor/autoload.php';

use Aws\S3\Exception\S3Exception;
use Aws\Common\Aws;
?>

The code above includes the autoload file so that we can use the AWS SDK from our file. Next we import the Aws\S3\Exception\S3Exception and Aws\Common\Aws namespaces so we can access the different classes available in them. One of those classes is the Aws class, which we use to load the configuration options for the bucket we are connecting to. All we have to do is call its factory method and pass in the path to the configuration file:

<?php
$aws = Aws::factory('config.php');
?>

The configuration file contains the following code:

<?php
return array(
    'includes' => array('_aws'),
    'services' => array(
        'default_settings' => array(
            'params' => array(
                'credentials' => array(
                    'key'    => 'YOUR_AWS_API_KEY',
                    'secret' => 'YOUR_AWS_API_SECRET',
                ),
                'region' => 'YOUR_BUCKET_REGION'
            )
        )
    )
);
?>

The configuration file just returns an array containing the options we need. The first is includes, which bootstraps the configuration with AWS-specific features. Next is services, where we specify the API credentials and the region of the bucket.
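If you'd rather not keep a separate file, the same options can live in a plain array; the v2 factory method accepts an array as well as a file path (an assumption worth double-checking against the SDK docs for your exact version):

```php
<?php
// The same settings as config.php, built as a plain array.
// The key/secret/region values are placeholders -- replace with your own.
$config = array(
    'includes' => array('_aws'),
    'services' => array(
        'default_settings' => array(
            'params' => array(
                'credentials' => array(
                    'key'    => 'YOUR_AWS_API_KEY',
                    'secret' => 'YOUR_AWS_API_SECRET',
                ),
                'region' => 'YOUR_BUCKET_REGION',
            ),
        ),
    ),
);

// $aws = Aws::factory($config); // requires aws/aws-sdk-php to be installed
```

The file-based approach is still nicer when you want to keep credentials out of version control.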

Uploading Files

Once that's done, we can upload files to the S3 bucket of your choice. First we grab a service client from the $aws object by calling its get method, which takes the name of the AWS service you want to use; since we're using S3, we pass in s3. We then call the putObject method on the $s3 object and pass in the required parameters as an array. The required keys are Bucket, Key, Body and ACL.

Bucket is the name of the bucket you want to upload the file to. Key is the path the file will have inside the bucket; with S3 you don't have to worry about whether the directory you're uploading to already exists, because no matter how deep the path is, S3 creates the directories for you. Body takes the result of an fopen call, which takes the path to the file on your local computer and the mode you want to open it in; we just want to read the file's contents, so we specify r. Finally, ACL is the Access Control List of the object, which works much like a file permission. Here we specified public-read, which means the file can be read publicly. For more information about ACLs, you can check out this page. We wrap all of this code in a try/catch so we can handle errors gracefully.

<?php
$s3 = $aws->get('s3');

try{
    $s3->putObject(array(
        'Bucket' => 'NAME_OF_BUCKET',
        'Key' => '/path/to/file/filename',
        'Body' => fopen('/path/to/file_to_uploads', 'r'),
        'ACL' => 'public-read',
    ));
}catch (S3Exception $e){
    echo "There was an error uploading the file.<br>";
    echo $e->getMessage();
}
?>
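Since a malformed parameter array is the most common cause of putObject errors, it can help to assemble and sanity-check the array up front. Here's a small helper to that effect (hypothetical, not part of the SDK):

```php
<?php
// Hypothetical helper: build the parameter array expected by putObject
// and fail early if the local file can't be read, instead of letting
// fopen emit a warning mid-upload.
function build_put_object_params($bucket, $key, $localPath, $acl = 'public-read')
{
    if (!is_readable($localPath)) {
        throw new InvalidArgumentException("Cannot read file: $localPath");
    }
    return array(
        'Bucket' => $bucket,
        'Key'    => $key,
        'Body'   => fopen($localPath, 'r'), // read-only stream of the local file
        'ACL'    => $acl,
    );
}

// Usage (bucket and paths are examples):
// $s3->putObject(build_put_object_params('bookr', 'uploads/a.jpg', '/tmp/a.jpg'));
```

This keeps the try/catch above focused on genuine S3 errors rather than local filesystem mistakes.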

Deleting Files

Next, here's how to delete an existing file from your S3 bucket. This uses the deleteObject method, which takes the name of the bucket and the path of the file as its arguments.

<?php
try{

    $s3->deleteObject(array(
        'Bucket' => 'NAME_OF_BUCKET',
        'Key' => '/path/to/file/filename'
    ));

}catch(S3Exception $e){
    echo "There was an error deleting the file.<br>";
    echo $e->getMessage();
}
?>

Listing Buckets

Lastly, here's how to get a list of the buckets that are currently in your Amazon account:

<?php
$result = $s3->listBuckets();

foreach ($result['Buckets'] as $bucket) {
    echo "{$bucket['Name']} - {$bucket['CreationDate']}<br>";
}
?>
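The loop body above is just string formatting, so it can be pulled into a small function and exercised against a canned result array before you wire it to the live API (a hypothetical helper, not part of the SDK):

```php
<?php
// Hypothetical helper: format the rows of a listBuckets result.
// $result mimics the array shape the SDK returns: a 'Buckets' key
// holding arrays with 'Name' and 'CreationDate'.
function format_bucket_list(array $result)
{
    $lines = array();
    foreach ($result['Buckets'] as $bucket) {
        $lines[] = "{$bucket['Name']} - {$bucket['CreationDate']}";
    }
    return $lines;
}

// Canned data standing in for a real $s3->listBuckets() response.
$fake = array('Buckets' => array(
    array('Name' => 'bookr', 'CreationDate' => '2015-01-01T00:00:00.000Z'),
));
echo implode("<br>", format_bucket_list($fake));
```

Separating formatting from the API call also makes it trivial to switch the output from HTML to plain text later.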

Conclusion

That's it! In this tutorial you've learned how to work with Amazon S3 from within your PHP applications. Specifically, we've taken a look at how to upload files, delete files, and list buckets.

Resources