Workinonit with Rails: riding high, building and passing pitfalls
Sunday 14 March 2021 / coding
In the third major independent project of the Flatiron software engineering curriculum, students are tasked with building an app using the Ruby on Rails framework. The app should be some kind of content management system incorporating complex forms managing associated data with the help of RESTful routes.
I decided to build a web app that allows users to manage their search for a job without having to deal with spreadsheet software, disorganised notes or uncoordinated chaos. With Workinonit - the name an ode to extraordinary music producer J Dilla - users can:
- find jobs by keywords (e.g. job title) and location
- add jobs by URL or manual entry
- keep track of progress on job applications
- save and review feedback from and notes on companies
Riding the Rails
Moving to Rails for this project required learning a lot of new syntax and ways of working, but the more you immerse yourself in its "convention over configuration" approach, the more you can start to focus on only the customisation that actually matters to your project.
Approaches that differed from the Sinatra framework included:
- separating route definitions from controller actions, and implicit rendering. This leads to cleaner-looking controllers, free from route specifications and explicit calls to render templates wherever actions follow standard RESTful conventions
- helper methods and before actions. In the Sinatra app I'd made significant use of before hooks, but in Rails you can define private methods for use both within the controller and before any controller actions fire using the `before_action` macro. The `helper_method` declaration also allows you to be more explicit about which methods to make available to views
- using Rails helpers in controllers but particularly views. The most useful of these are arguably things like `link_to` combined with path (or URL) helpers, allowing you to reference routes by name rather than typing the actual path out everywhere - much DRYer! In the end, I decided to go all out and write 99% of the views using Rails tag helpers. This might be overkill, but it does feel pretty nice, and leads to more consistent syntax than a mix of raw HTML and Rails helpers would
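As an illustrative sketch of these conventions - the controller, model and route names here are my own, not necessarily those in Workinonit - a controller using `before_action`, `helper_method` and implicit rendering might look like:

```ruby
class JobsController < ApplicationController
  # Runs before every action in this controller except show
  before_action :require_login, except: [:show]
  # Expose current_user to views as well as within the controller
  helper_method :current_user

  def show
    @job = Job.find(params[:id])
    # No render call needed: Rails implicitly renders app/views/jobs/show.html.erb
  end

  private

  def current_user
    @current_user ||= User.find_by(id: session[:user_id])
  end

  def require_login
    redirect_to login_path unless current_user
  end
end
```

In a view, a path helper combined with `link_to` then replaces any hard-coded URL - `link_to job.title, job_path(job)` rather than writing out `/jobs/42` by hand.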
Building on before
The streamlining offered by Rails provided more space to build on and extend existing skills and knowledge. The project gave me a chance to solidify my grasp of how to apply the model-view-controller paradigm within real domains, and I tried to think more carefully about the separation of concerns principle. In the last project, the distinction between the roles of my models, controllers and views was a bit blurred, with some heavy custom SQL querying in the controllers and, in some cases, significant data manipulation in the views; this time I tried to keep that to a minimum. Models handled the majority of data-focused logic and controllers the flow-based logic, while views were handed instance variables with data properly prepared, leaving only a small amount of logic for hiding or showing elements based on what the controller passed in. I'm still not 100% confident the balance is quite right - perhaps the concerns could be even more straightforwardly separated - but it feels like a significant step forward from the last project.
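As a hypothetical before-and-after of that shift (the names are illustrative, not taken from the app), query logic that might once have sat in a controller as raw SQL can live in the model as scopes, leaving the controller to orchestrate:

```ruby
class Job < ApplicationRecord
  # Data-focused logic belongs to the model...
  scope :matching, ->(keyword) { where("title LIKE ?", "%#{keyword}%") }
  scope :newest_first, -> { order(created_at: :desc) }
end

class JobsController < ApplicationController
  # ...while the controller handles flow and hands prepared data to the view
  def index
    @jobs = Job.matching(params[:keyword]).newest_first
  end
end
```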
In the last project, all but one model either belonged to users or belonged to a model that belonged to users. In this app there's still no real user-to-user interaction, but I thought about how some data might not need to be unique to every user - in a production environment this could save database space by avoiding duplication, and allow for potential future expansion into user-to-user features. I made almost all pages require login, with the exception of scraped webpages, meaning users can share a job they've found. Having jobs not belong to a user, and having one record per scraped job listing no matter how many times it's scraped, means there will only ever be one URL tied to that job... unless some key details change, leading to unique records for a single job. This also offers the potential to track how many users have saved or are interested in a particular job, which could open up further interesting informational features down the line. An entity relationship diagram representing the app's models is below; you can also open the diagram as a PDF.
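A rough sketch of that shape - the model and association names here are my guesses for illustration, not necessarily the app's actual schema - might be:

```ruby
class Job < ApplicationRecord
  # One record per scraped listing; not owned by any single user
  has_many :job_applications
  has_many :users, through: :job_applications
end

class JobApplication < ApplicationRecord
  # Join model: tracks one user's progress on one job
  belongs_to :user
  belongs_to :job
end

class User < ApplicationRecord
  has_many :job_applications
  has_many :jobs, through: :job_applications
end

# Counting how many users are interested in a job then becomes trivial:
# some_job.users.count
```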
I also built on my knowledge of:
- scraping - I decided to incorporate the primary focus of the first project as a feature of this one, only this time with shorter, more reusable methods, each with a more specific job
- CSS, this time using SCSS to streamline specification of styles using complex hierarchical selectors, as well as incorporate variables (making it much easier to manage a colour theme) and even a function
- JavaScript, using it to:
    - ask for confirmation before deleting data
    - play a J Dilla track (adapting the script I used on CS50 problem set project Nihongooo!)
    - prepend "https://" in URL fields if not provided
    - require at least one checkbox in a form to be checked before submission
    - require at least one input - URL or textarea - to have data before form submission
    - tidy up the URL after OAuth authentication, getting rid of ugly suffixes like "#_=_" or just "#" from Facebook and Google
    - toggle a feedback field that should only be available for unsuccessful applications
- how often to commit. I was aware that in previous projects I'd committed too infrequently, often writing long, multi-line commit messages describing a bunch of changes. This time, I tried to keep each change down to what I could usefully describe in one succinct commit message. I didn't always succeed, and sometimes the commits felt too small, but it feels like a step in the right direction
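Taking the URL-prepending behaviour as one example, the underlying check is simple; here's a plain-Ruby sketch of the idea (the app did this client-side, and the method name is mine):

```ruby
# Prepend "https://" unless the URL already starts with a scheme
def normalise_url(url)
  return url if url.to_s.strip.empty?
  url.match?(%r{\Ahttps?://}i) ? url : "https://#{url}"
end

normalise_url("example.com/jobs")   # => "https://example.com/jobs"
normalise_url("http://example.com") # => "http://example.com"
```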
Production pitfalls, deployment dangers
As with the last project, I decided to deploy the app to a production environment using Heroku. This brought a number of challenges that I had to overcome to get a working product.
The first was getting the PostgreSQL database set up. This took a while, but this time I managed to get it working fairly consistently across the development, test and production environments - next time I might try using PostgreSQL from the start rather than switching at the point of deployment.
The second challenge was getting Heroku to build the app successfully. Heroku errored out of the build process, saying there was a problem with precompiling assets. This was a bit of a red herring - I ended up reading a number of threads about issues with CSS/SCSS file extensions and other things related to the contents of the assets folder, when the actual issue was my approach to providing app IDs and secrets in the OmniAuth config. In the end I needed to switch from using credentials.yml.enc to environment variables - stored in .env locally and in config vars on Heroku.
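The working setup looked something like the following - the environment variable names here are illustrative, while strategy names like `google_oauth2` come from the respective OmniAuth gems:

```ruby
# config/initializers/omniauth.rb
# Secrets come from ENV: loaded from .env locally (e.g. via the dotenv gem)
# and from config vars on Heroku
Rails.application.config.middleware.use OmniAuth::Builder do
  provider :facebook, ENV['FACEBOOK_APP_ID'], ENV['FACEBOOK_APP_SECRET']
  provider :google_oauth2, ENV['GOOGLE_CLIENT_ID'], ENV['GOOGLE_CLIENT_SECRET']
end
```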
Once the build passed, when trying to open the app I once again faced Heroku's H10 errors, which provide very little detail about their cause. With so little information, it's hard to know where to look for solutions - H10 errors can be caused by almost anything. In the end, I discovered a Heroku help page that suggests specific code for the puma.rb file in the config folder, which differs from the default Rails code, as well as a different Procfile bootup command from the one in their main guide to getting started with Rails 5 on Heroku. Unclear or incomplete guidance that makes too many assumptions about the reader seems to be a recurring issue... but after making those changes, the app launched successfully!
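For reference, Heroku's suggested Puma config is along these lines (check their current docs before copying, as the details change between Rails and Puma versions):

```ruby
# config/puma.rb
workers Integer(ENV['WEB_CONCURRENCY'] || 2)
threads_count = Integer(ENV['RAILS_MAX_THREADS'] || 5)
threads threads_count, threads_count

preload_app!

port        ENV['PORT']     || 3000
environment ENV['RACK_ENV'] || 'development'

on_worker_boot do
  # Re-establish database connections in each forked worker
  ActiveRecord::Base.establish_connection
end
```

paired with a Procfile that boots Puma with that config file, e.g. `web: bundle exec puma -C config/puma.rb`.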
The last issue, and a mildly app-breaking one, is that scraping can work quite differently - or not at all - outside of a local environment. In local testing, everything was (and is) working great, but in the live environment, scraping produced a bunch of internal server errors. The main issue was websites denying permission to scrape their pages. Not content to leave a negative experience for the hordes of users waiting to manage their job applications on Workinonit, I spent an extra few hours working through these issues, eventually adding some error/exception handling within the scraper class. This logic rescues the app from crashing if it encounters errors when scraping content. I'd already implemented exception handling for user-provided URLs, so this was mostly a case of taking that and extending it to deal with other scraping features. Having more but smaller scraping methods made it a bit easier to address these issues without rewriting existing logic. Unfortunately these HTTP errors mean that scraping works inconsistently for two of the three providers - sometimes they deny requests and sometimes they don't - and these happen to be the only two with proper non-UK job coverage, but at least the internal server errors are fixed!
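The rescue pattern described here can be sketched as follows - the class and method names are illustrative, not the app's real ones. The idea is simply to trade a crash for a nil that the controller can handle gracefully:

```ruby
require 'open-uri'
require 'net/http'

class Scraper
  # Fetch raw HTML, returning nil (instead of raising) when the site
  # denies the request, can't be reached, or times out
  def fetch_html(url)
    URI.open(url, read_timeout: 5, open_timeout: 5).read
  rescue OpenURI::HTTPError, SocketError, SystemCallError,
         Net::OpenTimeout, Net::ReadTimeout => e
    warn "Couldn't scrape #{url}: #{e.message}"
    nil
  end
end

Scraper.new.fetch_html("http://nonexistent.invalid/") # => nil (with a warning)
```

A nil return then lets the controller redirect with a friendly flash message rather than responding with a 500.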
Below is a demo of the app, which is hosted on Heroku and can be accessed via workinonit.yndajas.co (you will be redirected to Heroku).