Workinonit with Rails: riding high, building and passing pitfalls

Sunday 14 March 2021 / coding


In the third major independent project of the Flatiron software engineering curriculum, students are tasked with building an app using the Ruby on Rails framework. The app should be some kind of content management system incorporating complex forms managing associated data with the help of RESTful routes.

I decided to build a web app that allows users to manage their search for a job without having to deal with spreadsheet software, disorganised notes or uncoordinated chaos. With Workinonit - the name an ode to extraordinary music producer J Dilla - users can:

Riding the Rails

Moving to Rails for this project required learning a lot of new syntax and ways of working, but the more you immerse yourself in its "convention over configuration" approach, the more you can start to focus on only the customisation that actually matters to your project.
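As a small example of what that convention buys you, a single line in config/routes.rb generates all seven RESTful routes for a resource - no custom routing needed for the standard CRUD flows (the jobs resource here mirrors this app's domain, but the same line works for any model):

```ruby
# config/routes.rb - one conventional line maps index, show, new, create,
# edit, update and destroy routes to the matching JobsController actions
Rails.application.routes.draw do
  resources :jobs
end
```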

Approaches that differed from the Sinatra framework included:

Building on before

The streamlining offered by Rails provided more space to build on and extend existing skills and knowledge. The project gave me a chance to solidify my grasp of how to apply the model-view-controller paradigm within real domains. I also tried to think more carefully about the separation of concerns principle. In the last project there was a somewhat blurred distinction between the roles of my models, controllers and views, with some heavy custom SQL querying going on in the controllers and, in some cases, significant data manipulation in the views. This time I tried to keep that to a minimum: models handled the majority of data-focused logic, controllers handled flow-based logic, and views received instance variables with data properly prepared, leaving only a small amount of view logic for hiding or showing elements based on the data passed from the controller. I'm still not 100% confident the balance is quite right - perhaps the concerns could be even more straightforwardly separated - but it feels like a significant step forward from the last project.
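As a hypothetical illustration of the shift (a sketch, not the app's actual code): rather than the controller assembling a custom query, the model owns it, and the controller just hands the result to the view:

```ruby
# A hypothetical sketch of the separation, not the app's actual code
class Job < ApplicationRecord
  # data-focused logic lives in the model...
  scope :recently_saved, -> { order(created_at: :desc) }
end

class JobsController < ApplicationController
  # ...while the controller handles flow and passes the view a
  # ready-to-render instance variable
  def index
    @jobs = Job.recently_saved
  end
end
```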

In the last project, all but one model either belonged to users or belonged to a model that belonged to users. In this app there's still no real user-to-user interaction, but I thought about which data doesn't need to be unique to each user - in a production environment this could save database space by avoiding duplication, and it leaves room for future user-to-user features. I made almost all pages require login, with the exception of scraped webpages, meaning users can share a job they've found. Because jobs don't belong to a user, and there's only one record per scraped job listing no matter how many times it's scraped, there will only ever be one URL tied to that job... unless some key details change, leading to unique records for a single job. This also offers the potential for tracking how many users have saved or are interested in a particular job, which could open up opportunities for further interesting informational features down the line. An entity relationship diagram representing the app's models is below; you can also open the diagram as a PDF.

[Entity relationship diagram]
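In Rails association terms, the heart of that design is a many-to-many relationship through a join model. Here's a rough sketch with hypothetical model names - the app's real structure is in the diagram above:

```ruby
# A sketch with hypothetical model names; the app's real structure is in
# the diagram above. Jobs stand alone, and users connect to them through
# a join model, so one scraped listing can serve many users.
class Job < ApplicationRecord
  has_many :saved_jobs
  has_many :users, through: :saved_jobs
end

class SavedJob < ApplicationRecord
  belongs_to :user
  belongs_to :job
end

class User < ApplicationRecord
  has_many :saved_jobs
  has_many :jobs, through: :saved_jobs
end
```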

I also built on my knowledge of:

Production pitfalls, deployment dangers

As with the last project, I decided to deploy the app to a production environment using Heroku. This brought with it a number of challenges that I had to overcome to get a working product.

The first was getting the PostgreSQL database set up. This took a while, but this time I managed to get it working fairly consistently across the development, test and production environments - next time I might try using PostgreSQL from the start rather than switching at the point of deployment.
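For reference, the end state was a config/database.yml along these lines - a sketch rather than the app's exact file, with illustrative database names (on Heroku, the DATABASE_URL config var takes precedence over this file anyway):

```yaml
# config/database.yml - a sketch using PostgreSQL in all three
# environments; database names are illustrative
default: &default
  adapter: postgresql
  encoding: unicode
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>

development:
  <<: *default
  database: workinonit_development

test:
  <<: *default
  database: workinonit_test

production:
  <<: *default
  database: workinonit_production
```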

The second challenge was getting Heroku to build the app successfully. Heroku errored out of the build process, saying there was a problem with precompiling assets. This was a bit of a red herring - I ended up reading a number of threads about CSS/SCSS file extensions and other issues with the contents of the asset folder, when the actual problem was my approach to providing app IDs and secrets in the OmniAuth config. In the end I needed to switch from using credentials.yml.enc to environment variables: a .env file locally and config variables in Heroku.
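The working setup looked something like the sketch below - the provider list and variable names are illustrative. Locally the values come from a .env file (loaded by the dotenv-rails gem); on Heroku the same names are set as config vars, so the build no longer depends on decrypting credentials.yml.enc:

```ruby
# config/initializers/omniauth.rb - a sketch; providers and ENV var
# names are illustrative, not the app's exact config
Rails.application.config.middleware.use OmniAuth::Builder do
  provider :github, ENV['GITHUB_CLIENT_ID'], ENV['GITHUB_CLIENT_SECRET']
  provider :google_oauth2, ENV['GOOGLE_CLIENT_ID'], ENV['GOOGLE_CLIENT_SECRET']
end
```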

Once the build passed, when trying to open the app I once again faced Heroku's H10 errors, which provide very little detail about their cause. With so little information, it's hard to know where to look for solutions - H10 errors can mean almost anything. In the end, I discovered a help page from Heroku suggesting specific code for the puma.rb file in the config folder, which differs from the default Rails code, as well as a different Procfile bootup command from the one in their main guide to getting started with Rails 5 on Heroku. Unclear or incomplete guidance that makes too many assumptions of the reader seems to be a recurring issue... but after making those changes, the app launched successfully!
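For anyone hitting the same wall, the Heroku-suggested config is along these lines - a sketch from memory, so check Heroku's Puma deployment guide for the canonical version:

```ruby
# config/puma.rb - along the lines of Heroku's suggested Puma config
# (paired with a Procfile containing: web: bundle exec puma -C config/puma.rb)
workers Integer(ENV['WEB_CONCURRENCY'] || 2)
threads_count = Integer(ENV['RAILS_MAX_THREADS'] || 5)
threads threads_count, threads_count

preload_app!

port        ENV['PORT']     || 3000
environment ENV['RACK_ENV'] || 'development'

on_worker_boot do
  # reconnect ActiveRecord in each forked worker process
  ActiveRecord::Base.establish_connection
end
```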

With a working app, I moved on to getting OAuth working in the live environment. This was surprisingly easy (after having dealt with the environment/config variables earlier)! It mostly involved telling each provider the new domain and callback details. With Google, allowing any Google account rather than pre-specified ones will require sorting out a privacy policy and submitting it for approval, but otherwise this step didn't cause much frustration!

The last issue - a partially app-breaking one - is that scraping can work quite differently, or not at all, outside a local environment. In local testing, everything was (and is) working great, but in the live environment, scraping produced a bunch of internal server errors. The main issue was websites denying permission to scrape their pages. Not content to leave a negative experience for the hordes of users waiting to manage their job applications on Workinonit, I spent an extra few hours working through these issues, eventually adding error/exception handling within the scraper class. This logic rescues the app from crashing if it encounters errors when scraping content. I'd already implemented begin-rescue-end exception handling for user-provided URLs, so this was mostly a case of extending that to the other scraping features. Having more but smaller scraping methods made it easier to address these issues without rewriting existing logic. Unfortunately these HTTP errors mean that scraping works inconsistently for two of the three providers - sometimes they deny requests and sometimes they don't - and these are the only two that provide proper non-UK job coverage, but at least the internal server errors are fixed!
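The pattern boils down to something like this - a simplified sketch with illustrative names, assuming OpenURI and Nokogiri for fetching and parsing:

```ruby
require 'open-uri'
require 'nokogiri'

class Scraper
  # A simplified sketch of the rescue pattern - class and method names
  # are illustrative. Sites that deny scraping raise OpenURI::HTTPError
  # (e.g. a 403), which previously bubbled up as an internal server error.
  def self.fetch_page(url)
    Nokogiri::HTML(URI.open(url))
  rescue OpenURI::HTTPError, SocketError, Errno::ECONNREFUSED
    # return nil so callers can show a friendly message instead of crashing
    nil
  end
end
```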


Below is a demo of the app, which is hosted on Heroku and can be accessed via workinonit.yndajas.co (you will be redirected to Heroku).