Poll Results

Today, Jon happened to be in the west and decided to drop by my place to work on the project together. Started off the morning going through our progress in the past week. Fixed a couple of bugs as we were testing the app together.

We started off by creating a short link for users to answer polls. Previously, a user would have to type in this long URL in order to answer a poll:


The poll id as stored in the database is MongoDB’s ObjectId, a 12-byte value represented as a 24-character hexadecimal string. This would be extremely inconvenient to type out, especially on small mobile devices. As such, we generated a short URL that is saved with each poll as it is created. After implementing the routes, the URL users had to type was much shorter than before.


Before deciding to implement it on our own, we also tried out Google’s URL shortener API. However, the limitation there was that the short URL would be in the form http://goo.gl/xxxxxx. We decided this was not ideal, as anyone with the link could also access the publicly available click statistics.

We also tried using ShortId and Hashids to come up with the short link, but eventually settled on generating it randomly:

 Math.random().toString(36).substring(2, 8)
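Wrapped up as a function, the approach looks roughly like this (the function name and the retry loop are illustrative, not our exact code; the loop handles the rare case where the base-36 string comes out shorter than expected):

```javascript
// Sketch of the random short-code approach. Math.random().toString(36)
// yields "0.xxxx..." in base 36; substring(2, 8) takes up to six
// characters after the "0.".
function generateCode() {
  var code = '';
  // Occasionally the base-36 string is shorter than expected, so keep
  // appending until we have a full six-character code.
  while (code.length < 6) {
    code += Math.random().toString(36).substring(2, 8);
  }
  return code.substring(0, 6);
}

// In the real app, the generated code is saved with each poll; a
// collision check against existing polls would be needed before
// accepting a code.
console.log(generateCode());
```

Since the code is only six base-36 characters, collisions become more likely as the number of polls grows, so checking the database before saving a code is worthwhile.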

After getting this done and tested, we proceeded to work on the poll results page. Started off building the API for it, then the user interface for displaying poll results.

After Jon left, I also spent a little bit more time improving the UI of the poll results page. We decided that we should also show results in a chart, so I made use of the Google Charts API to draw the charts dynamically based on the result. After looking around, I came across angular-google-chart, an AngularJS module for Google Charts. I was able to get this working pretty quickly, but spent a lot of time trying to make the chart responsive. As of now, it displays fine on mobile, but could be drawn much bigger on larger screen sizes.

We might also consider allowing users to choose what kind of charts they would like to display (bar charts, pie charts, donut charts, etc).

Another feature we are considering is for poll results to be updated and displayed live. We are looking either at polling the server at a fixed interval, or at using WebSockets so that the server can push data to the client whenever a new poll answer is received, allowing the UI and charts to be redrawn.

Poll Answer

Apart from working on the login redirect yesterday, I also worked on the UI for the poll answer page. Today’s work is mainly storing the poll answers to the database.

I had previously designed the poll schema such that it embeds poll answers within the poll itself. I thought this might be a convenient way to store data, as a single delete request would remove the poll together with its associated answers. However, I ran into a lot of problems trying to get data stored as an embedded document within an existing Poll object.

Eventually, I decided that having a separate collection for poll answers may be a more suitable option.

/lib/models/answer.js —

/**
 * Answer Schema
 */
var AnswerSchema = new Schema({
  owner_id: Schema.Types.ObjectId,
  poll_id: Schema.Types.ObjectId,
  answer: Number,
  updated_at: { type: Date, default: Date.now }
});

/lib/models/poll.js —

/**
 * Poll Schema
 */
var PollSchema = new Schema({
  question: { type: String, required: true, trim: true },
  owner_id: Schema.Types.ObjectId,
  active: { type: Boolean, default: false },
  choices: [ { type: String, required: true, trim: true } ],
  created_at: { type: Date, default: Date.now }
});

PollSchema.path('choices').validate(function (value) {
  return value.length >= 2 && value.length <= 8;
}, 'Number of choices invalid');
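The choices validator above is just a range check on the array length; pulled out on its own, it behaves like this (a standalone sketch of the function passed to validate):

```javascript
// Standalone version of the validator passed to
// PollSchema.path('choices').validate: a poll must offer between
// two and eight choices.
function choicesValid(value) {
  return value.length >= 2 && value.length <= 8;
}

console.log(choicesValid(['Yes']));       // false: too few choices
console.log(choicesValid(['Yes', 'No'])); // true
```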

Envisioning the End

Some time was lost by not starting immediately on the poll answer and results pages, as we wanted a better picture of how we wanted the layout to look. Turns out that after much (or very little) thought, we decided to go straight to the Bootstrap elements and forget about any further fancy ideas until we at least got the views up.

In the process, I discovered a couple of nifty tools for dealing with design interfaces:

While it provides real-time code collaboration, I was more concerned with the live preview, as I could copy over snippets of HTML or the Angular scripts to see if they worked in the templates we were adapting from the provided documentation.

I liked Easel for what it was: straight-up user interfaces I could grab code from and use directly. But the use case here is unfortunately limited, since we had to do things the Angular way and too much of the CSS got in the way. Furthermore, it was painful to learn the interface and all the dragging and dropping just to come up with a design I could be happy with. Maybe in future I will have better luck with it.

I took the time to also prepare for the first milestone by doing a log cleanup on our spreadsheet. May is coming to an end after all.

Login Redirection

One issue that we have yet to resolve is where to redirect the user after a successful login. Before today, this defaulted to /app/dashboard. However, given that a user may be given a direct link to a poll, for example, and has yet to log in, this would make for a very bad user experience: the user would have to go back to the source and get the direct link again after logging in.

I had this issue in mind a few days ago, and started some partial implementation. However, I was stuck on how to redirect users back to the authenticated route they were visiting previously. After a few hours of reading and trying, I finally got it working.

The following paragraphs summarise the steps taken in the implementation.

In app.js, if the next route requires authentication and the user is not authenticated, we redirect the user to the login page instead. However, we also append a query string indicating where to redirect the user after login.

/a/abc123 --> /login?redirectTo=/a/abc123

The login page takes whatever is in the redirectTo parameter and includes it with the GET request when the user clicks the login button.

--> /login/nus?redirectTo=/a/abc123

A new controller was made to handle OpenID authentication, and before passing control to PassportJS, we check whether there is a redirectTo parameter and store it in the session if it exists. Here, we also do some simple validation to make sure what is given is a relative path, not a full address. This prevents the application from having an open redirect.

Upon completion of the OpenID authentication, the app is brought back to /login/nus/return. Here, the authentication controller checks whether a redirectTo is stored in the session, and if so passes it to the successRedirect parameter for PassportJS. The user is then redirected to the page they requested before authentication.

/login/nus/return --> /a/abc123
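The relative-path check mentioned above can be sketched as a small helper (the function name is illustrative and the /app/dashboard fallback mirrors this write-up; the exact implementation differs):

```javascript
// Sketch of validating redirectTo before storing it in the session.
// Only a relative path such as "/a/abc123" is accepted; anything that
// could leave the site falls back to the default, which prevents the
// app from acting as an open redirect.
function safeRedirect(redirectTo) {
  if (typeof redirectTo !== 'string') return '/app/dashboard';
  // Must start with a single "/": "//evil.com" (protocol-relative) and
  // "http://evil.com" (absolute) are both rejected.
  if (redirectTo.charAt(0) !== '/' || redirectTo.charAt(1) === '/') {
    return '/app/dashboard';
  }
  return redirectTo;
}

console.log(safeRedirect('/a/abc123'));       // kept
console.log(safeRedirect('http://evil.com')); // rejected, default used
console.log(safeRedirect('//evil.com'));      // rejected, default used
```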

Testing Deployment

After discussions with Jon, we finally decided to buy the feedbaker.com domain. It wasn’t my first time getting a domain, so that process was a fairly straightforward one. After paying for the domain, I went on to deploy a new DigitalOcean droplet and configured the nameservers and DNS to point to the new instance.

The next hour was spent provisioning the server. These were some things I had to install before getting started with the test deployment:

  • git
  • nodejs
  • npm
  • mongodb
  • compass (used by build scripts to compile sass into css)
  • Bower
  • Grunt

Once I got those ready, I proceeded to clone the repo, run the build script and start the server. Everything worked pretty well, except that the OpenID return URL was still hard-coded to localhost on my development machine. To fix that, I added base_url as a new item in the config files so I can have different values for development and production. After that was done, everything worked, and I was pretty happy with the results so far.

Spent some time cleaning up the code, fixing several warnings jshint was returning. While doing that, I noticed that jshint was throwing errors that the method confirm() was an undefined global. After reading up more on jshint, I found that I could suppress that error by adding devel: true to .jshintrc (the config file for jshint). However, after some discussion with Jon, we decided that removing the confirm() prompt would provide a better user experience.

In order to further enhance the UX, I also added a loading state when users click to activate or deactivate a poll. Prior to that, clicking the button would make a PUT request to the API and update the UI with the expected result immediately. However, if the request failed, the user could be misled into thinking the action had been applied successfully.

To end off the day, I went back to exploring how the app should be deployed. I read about placing a reverse proxy in front of NodeJS and decided to try running the app behind nginx. Also came across node-http-proxy, and might give it a try some time down the road to see which is more suitable.

After spending another hour or so configuring nginx as a reverse proxy (got help from the nginx docs and wiki), I was able to get everything working. Made some changes to the app’s config file so it listens only on localhost, meaning users cannot access the NodeJS server directly via feedbaker.com:8080.
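The proxy block ended up roughly this shape (a sketch, assuming the Node server listens on 127.0.0.1:8080; the directive values are illustrative, not our exact config):

```nginx
server {
    listen 80;
    server_name feedbaker.com;

    location / {
        # Forward requests to the NodeJS server, which now listens
        # only on the loopback interface.
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```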

Tried running the site through Google’s PageSpeed Insights. The results advised caching static content to make pages load faster, so I went back and tried to configure nginx accordingly.

While doing that, I found that the page was loading a non-existent vendor.css. Apparently, the build script included that file even though I did not have any vendor CSS files. After some investigation, this turned out to be a bug in grunt-usemin that was fixed in a later version. Updating the package resolved the issue.

Apart from all these, I also came across and made use of Fiddler, a free web debugging proxy. With that, I was able to introduce a certain amount of latency to the app to simulate real-life use. I found that the My Polls page shows, for a brief instant, that I currently have no polls, until the API responses arrive and the page is updated. With this tool, I made some changes to further improve the user experience. I was also able to use Fiddler to set breakpoints and manipulate HTTP traffic in real time.

All in a ‘View’

It’s over a week in now, and although slightly better off than where I first started, I’m still rather lost.

Read lots and added more things into the reading/tutorial list to go through:

  • Stumbled on a question on Unobtrusive JavaScript vs JavaScript Application link on Reddit.
  • Unfamiliarity with Node.js led to: How do I get initiated? (I agree with the asker that the documentation on Node.js is absolutely unfriendly for beginners.)
  • Angular has a pretty steep learning curve, so I had to look for other guides.
  • If I find the time, perhaps I’ll try to draw parallels between Angular and Ember.js, which will hopefully aid my understanding of Angular’s client-side MVC.

Sat through a really long one today with Jon as he was revising the module of saving Polls to the MongoDB database. We covered everything from creating Services/Factories in Angular to figuring out the $resource feature in the API. The day was spent reading through documentation and loads of fact finding which I also intend to catch up on through the git commit logs of the code (time I’m trying to make – perhaps burn a bit of midnight oil over the weekends to get this done).

Perhaps let’s start with some of the interesting and useful content I’ve learnt about today (which may be obvious to adept developers out there).

Client side and Server side validation

I think the best way for me to explain this is to refer to the best answer itself:

Client-side validation just avoids the client from going “but I filled this all in and it didn’t tell me anything!”. It’s not actually mandatory… Server-side validation is also crucial due to the fact that client-side validation can be completely bypassed by turning off JavaScript. In a way, JS-driven validation is a convenience and an aesthetic/cosmetic improvement and should not be relied upon.

We decided to add client-side validation to our poll creation because we felt it would be better to use Bootstrap elements to handle the user experience of invalid inputs. What was more important, though, was that server-side validation is still necessary, as highlighted in the quote above. Without a proper way of informing the user that an invalid response was made, a non-technical user may not know whether the poll was actually created.

With this security and usability feature in mind, it led me to know more about a very important web application development concept of…

Securing the API

Part of securing the API involves the server-side validation covered above. The purpose of this is to prevent a user from manipulating the database through an insecure PUT request. For instance, in other applications such as one that keeps score, we do not want the user to be able to save invalid strings of information that may trigger unwanted changes to the database. This includes changing the user’s score to another score value found in the database (or simply editing it due to lax permissions).

In our case, it was important that we validate the Poll model API to disallow simple exploits against the database, which would otherwise return invalid strings of information when we retrieve the poll.

One would also want to secure the API because anybody who can view the page can also view every JavaScript file included in the HTML. That means the user can see exactly how the front-end validation is implemented, which is another reason client-side validation is certainly not good enough.

Use of REST Clients to test the API

The Chrome Web Store has a good app for this purpose. REST clients are amazing because they handle all the different request types: GET, POST, PUT, and many of the other usual CRUD and non-CRUD HTTP requests to the server. Our poll creation module, for instance, started out as a PUT request that deals with the following lines of code:

 {
   "_id": req.params.id,
   "owner_id": req.user._id
 }, {
   $set: set
 }, ...

Using a REST client allows us to check whether the request was valid and successful via a status code 200 response (which can also be observed from the Inspector in Chrome), and also to see if we can manipulate the data in the PUT request. In an earlier version of the code, we simply passed the entire req.body to the PUT request, allowing users to do things they should not be able to, which brings me to the next learning point.
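The fix amounts to whitelisting: build the $set object from an explicit list of fields rather than trusting the request body wholesale. A sketch (the field names mirror our Poll schema; the helper name is illustrative):

```javascript
// Only fields on the whitelist can ever reach the $set object passed
// to the update; everything else in req.body is ignored.
var ALLOWED_FIELDS = ['question', 'choices', 'active'];

function buildSet(body) {
  var set = {};
  ALLOWED_FIELDS.forEach(function (field) {
    if (body[field] !== undefined) {
      set[field] = body[field];
    }
  });
  return set;
}

// A client trying to overwrite owner_id gets silently ignored.
console.log(buildSet({ question: 'Lunch?', owner_id: 'someone-else' }));
```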

Principle of Least Privileges (from Wikipedia)

The principle means giving a user account only those privileges which are essential to that user’s work. The principle applies also to a personal computer user who usually does work in a normal user account, and opens a privileged, password protected account (that is, a superuser) only when the situation absolutely demands it.

This idea also carries forward to user creation on our servers, where we ensure that each user we create has only the privileges needed for their one job. For example, when creating a user with privileges to a folder under public_html/www, that user should only be given rights to www (if he/she is the web designer) to create posts or do whatever he/she wants within the constraints of that folder.

Mission Control: Twitter Bootstrap

Professor Min-yen did a Mission Control today on Bootstrap, which I sat through via Hangouts on Air whilst working on the project with Jon. Funnily enough, almost all the features that we’ve been playing around with in our View layers are Bootstrap UI objects. Revisiting the basics was also refreshing as we got to see some Sublime Text action on screen. Got to appreciate GruntJS a lot more after watching the painful save-and-refresh process that was broadcast.

Wrapping up

Still tons to do with regards to the project. Shall find a day to meet up and bulldoze through development as far as possible. Glad there were a handful of takeaway points from today.

Saving Polls to Database

Today was mainly spent figuring out how to save the Poll model to the database. In doing that, I learnt about AngularJS Providers and Factories, and how they are injected into controllers to get stuff done. I also learnt how to use AngularJS to perform client-side validation, and Mongoose to perform server-side validation.

When a user submits the form to create a new poll, AngularJS first performs client-side validation, then calls the createPoll() method. The controller then uses the Poll Factory to make a POST request to the API. The API routes the request to the server’s Poll controller, and performs server-side validation against the provided schema. Mongoose abstracts and simplifies this validation process. The newly created object, together with the poll id, is then returned. A callback in the AngularJS page controller is triggered, redirecting the user to the newly created poll.

I also learnt and made use of angular-moment while trying to improve the application’s user experience. Instead of displaying the date and time a poll was created, this module displays the relative time (e.g. 4 minutes ago) of creation.

I also started off on the poll details page. This page is designed for presenters to show the link of the poll for the audience to visit and answer. To make this easier, we planned to generate short URLs and provide QR codes so that users do not have to type the full, long URL into their mobile browser or laptop. For the QR code, I made use of Google’s Chart API to draw and return an image of the QR code for the corresponding URL used to access the poll.
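Getting the QR code boils down to building one image URL against the Chart API. A sketch (the size is an illustrative value; cht=qr, chs, and chl are the chart type, image size, and encoded content parameters respectively):

```javascript
// Build a Google Chart API image URL that renders a QR code for the
// given poll URL. The poll URL must be percent-encoded since it is
// passed as a query-string value.
function qrCodeUrl(pollUrl) {
  return 'https://chart.googleapis.com/chart' +
    '?cht=qr' +                             // chart type: QR code
    '&chs=300x300' +                        // image size (illustrative)
    '&chl=' + encodeURIComponent(pollUrl);  // content to encode
}

console.log(qrCodeUrl('http://feedbaker.com/a/abc123'));
```

The resulting URL can be dropped straight into an img tag on the poll details page.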

We have yet to start on the URL shortener, but will probably make use of an existing URL shortening service such as goo.gl or bit.ly, or perhaps come up with our own within the same domain.

More Design Decisions

Over the weekend

Decided to take it easy. Jon went ahead to make some more minor changes on the dashboard page. And we decided we had to have some more wireframes for the page layouts. While he was hard at work thinking, I continued on a couple other things:

If the Liftoff workshop’s GAE crash course didn’t cover the idea of MVC cogently, this pretty much explained how MVC works at the AngularJS level, as Angular pretty much breathes that concept. Pretty awesome how I stumbled on this on Hacker News just in time on Saturday as well.

Read enough of the basics to clear up the confusion with the terminologies that show up in my git client – Tower. That said, I prefer the command line now that I’ve managed two nifty (and helpful) commands that improve the user experience:

$ git config --global core.editor "open -t -W"

$ git config --global --add color.ui true

The first one tells Git which editor I want it to use (on my Mac, open -t -W opens the default text editor, in my case Sublime Text, and waits for it to close), and the second gives me colour in the terminal’s user interface. Neat.

  • Explore existing implementations

As we needed more design ideas for the layout, considering how barebones everything seems to be on our development machines, I decided to go search around for existing polling applications that are out there online. Most of them are catered for surveys (SurveyMonkey.com) and the good ones require an account and a price plan (Polldaddy.com). Amongst them, I found Checkbox to be the most balanced, even though the use cases differ a lot from ours. Checkbox gave me more insight as to what we should include into our Information Architecture, as well as the additional features we might want to implement in future: such as to include a full fledged survey on top of our one question polling system.

Moreover, as I explored into some of the layout best practices, I also picked up on the Rule of Seven, that is one should have no more than 7 things on the global navigation bar (not that I think we are able to come up with 7 anyway).

  • More CSS and HTML, Wireframes

Read into CSS selectors and media queries as part of the Responsive Web Design package. Learnt about the box model and this amazing thing called box-sizing, as well as the float layout. Since the days of IS2102, I’ve been using Draw.io to do use case diagrams and activity diagrams amongst others, but it can do so much more. Instead of drawing with pen and paper like we did initially for our brainstorming, I’ll be using objects from Draw.io to facilitate our wireframes and design ideas. What I like most about it is the integration with Dropbox or Google Drive, which makes storing and retrieving the templates easy.


The Week Ahead

I’m certainly going to be busy with other commitments on a list of priorities, but the plan seems to be pretty well defined. Shall use the evenings to speak more with Jon on what’s going on.

We intend to purchase the feedbaker.com domain to use on a DigitalOcean droplet (yeah we’re spending money) by this week when we can get more of the layout done up – mainly so we could test on a server that doesn’t deploy from our machines. Fiverr.com is also something we’re looking at to get our graphic designs and logos and other neat sprites that we want to generate from – all for only 5 bucks.

Meanwhile, time to get on to more prototyping.

Database Schema Design

Today’s work was mainly adding the “My Polls” page to the application and modelling the database schema for the poll model in Mongoose.

One of the challenges I faced was finding out how I should associate each poll question with the many poll answers it will have. I had previously read that mongoDB was somewhat better than relational databases at modelling one-to-many relationships.

For example, in WordPress (a blogging platform that uses relational databases), the posts model requires two tables (post & post_meta) in order to represent all the information. This is because a post may, for instance, contain zero to many tags — and this cannot be stored into a single table without serialization. In mongoDB, however, information like tags could be stored in an embedded sub-document.

This started me reading about mongoDB’s embedded documents. In coming up with the schema, I also learnt a bit about data model design from the mongoDB docs — when it will be better to use embedded data models and when to use normalized data models.

Embedded Data Model – Generally better if you have a one-to-many relationship between entities.

Normalised Data Model – Generally used when the relationship is many-to-many, and when embedding would result in duplication of data.
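With plain objects (hypothetical data), the two shapes look like this; the embedded form keeps the one-to-many answers inline, while the normalised form links them back by poll_id:

```javascript
// Embedded: answers live inside the poll document itself.
var embeddedPoll = {
  _id: 'poll1',
  question: 'Lunch?',
  answers: [
    { owner_id: 'user1', answer: 0 },
    { owner_id: 'user2', answer: 2 }
  ]
};

// Normalised: answers live in their own collection and reference the
// poll they belong to via poll_id.
var normalisedPoll = { _id: 'poll1', question: 'Lunch?' };
var normalisedAnswers = [
  { poll_id: 'poll1', owner_id: 'user1', answer: 0 },
  { poll_id: 'poll1', owner_id: 'user2', answer: 2 }
];

// Both shapes carry the same information; deleting an embedded poll
// removes its answers for free, while the normalised form needs a
// separate delete on the answers collection.
console.log(embeddedPoll.answers.length, normalisedAnswers.length);
```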

Getting up to speed


Upon looking through the high-level decisions that we had made, Jon and I essentially spent the first half of the day exploring what we had to do: on his part, he was still deciding on the framework and what we could do to optimise for the quickest possible delivery given our time constraints. On my end, I was more focused on getting up to speed.

I spent a large part of day one familiarising myself with the languages that I expect will serve us longest in development. Jon and I had initially thought of developing on Ruby on Rails during Liftoff, only to find out that the learning curve was far too steep for the both of us to achieve the goal of 8th June. We therefore had to redirect our attention and focus onto more manageable technologies, and I had to pick up JavaScript, Git, and the Web fundamentals at lightning speed.

Point of Entry

I figured I had to start somewhere. Since we decided to get the front-end user interface done up first (because I supposed learning HTML/CSS was going to be slightly easier along with JS), I began by learning JavaScript from scratch at Codecademy. Syntactically simple, it was a breeze up till the point where I realised Node.js isn’t just JavaScript, and neither is anything else that is essentially a JS framework. Thankfully I stumbled upon three things I found most valuable in getting myself initiated on web development:

It taught me everything I needed to understand and know about responsive web design and the best practices in creating page templates. Moreover the nice site UI as well as the readily available source gave me a bird’s eye view which I could cover easily in just under 2 hours.

I found Duckett’s book to be an incredibly beautiful resource on learning HTML and CSS, although resources are plentiful all across the web. I also used sites such as learnlayout.com and MDN’s references to get me initiated into the foray.

This site is a godsend. After learning the basics of JavaScript, this was exactly what I needed to get familiar with the Node.js technology, considering the large code base that we’re going to maintain with modularity. I’ve yet to cover all the articles that are listed/compiled with great care, but I’ll make an effort to digest the gist of what is important to get started quicker.


After I spent half of the day on material I certainly couldn’t finish, Jon got me started on the scaffolding of Feedbaker’s architecture on Yeoman.io. Google Hangouts has aided our collaboration immensely and has also helped relieve a lot of the anxiety and frustration over not being able to communicate effectively through instant text messaging.

Although there wasn’t much of a syllabus, I was very quickly initiated into the inner workings of Grunt.js and Bower.io, two applications I found immensely powerful and useful in maintaining and managing the code that was to be deployed. We explored a number of other things, including:

  • Express.js’s Architecture
  • The glory of AngularJS – partials, views, controllers, and routing, and the entire workings of the bower components that manage all the packages.
  • Bootstrap as a web UI front-end framework
  • git reset --hard HEAD: the only way for me to not mess things up
  • Touched a bit on cryptography: salts and one-way MD5 hashing (the reason being that we were going through the authentication process, which we resolved using the Passport.js package on npm)

Also, while Jon wrote most of the code, I followed each line very closely on Hangouts (something that drove my MacBook Air’s CPU crazy as I displayed the screen share on a larger external monitor). Makes me feel guilty about not having any git pushes so far.

Yeoman saved our lives by providing the skeleton that we were dying to figure out on the first day (which had led to quite a bit of worry). Decided to drop nitrous.io because it wasn’t being friendly with its premature shutdowns due to inactivity (seriously?).

Also, I came up with a brief information architecture to outline what we wanted to include in our view layer. More to come as we refine along the way.