Blog Archives

A quick tip with .Net generics

Posted in Code, Inside TFG

Generic constraints in .Net can be recursive: you can use a type in its own generic constraints. Let's look at an example of where this can be useful.

Let's say you have some kind of persistent object, IEntity. To avoid primitive obsession we are going to create a type-safe Reference<T> object to act as a pointer to our entities, rather than just an int property called Id.

public interface IReference<TEntity>
    where TEntity : IEntity
{ /* Actual interface doesn't matter */ }

We want a base entity to inherit from, which among other things exposes an IReference<T> to itself.  We can’t be much more specific than returning an IReference<EntityBase>, since we can’t know the subclass type at compile time. Unless we hail to the generic recursion gods.

public abstract class EntityBase<TSelf> : IEntity
    where TSelf : EntityBase<TSelf>
{
    public IReference<TSelf> Reference { get { ... } }
}

Now we just supply the type when we declare our subclass:

public class MyEntity : EntityBase<MyEntity> { }

You can do much the same thing in Java, but it’s not quite as safe since MyEntity extends EntityBase<OtherEntity> will compile just fine.

As an exercise for the reader: consider the visitor pattern, where we implement a virtual Accept method in order to have compile-time type knowledge of this. Can you now write a non-virtual Accept method?

A look at Cayley

Posted in Code, Inside TFG

Recently I took the time to check out Cayley, a graph database written in Go that’s been getting some good attention.


From the GitHub page:

Cayley is an open-source graph inspired by the graph database behind Freebase and Google’s Knowledge Graph.

Also, to get the project owner's disclaimer out of the way:

Not a Google project, but created and maintained by a Googler, with permission from and assignment to Google, under the Apache License, version 2.0.

As a personal disclaimer, I’m not a trained mathematician and my interest comes from a love of exploring data. Feel free to correct me if something should be better said.

I've seen Neo4j… I know graph DBs

Many people exploring graph databases start with Neo4j, and conceptually Cayley is similar, but in usage terms there is a bit of a gap.

Neo4j has the Cypher query language, which I find very expressive but also more like SQL in how it works. Cayley uses a Gremlin-inspired query language wrapped in JavaScript; the more you use it, the more it feels like writing code with chained method calls. The docs for this interface take some rereading, and it was only through experimentation that I started to see how it all worked. They can be found in the GitHub docs folder. I worked my way through the test cases for some further ideas.

Another major difference is that Neo4j offers a gentler transition from relational databases. With Neo4j you can group properties on nodes and edges, so that as you pull back nodes it feels a little more like hitting a row in a table. Cayley, however, is a triple/quad-store based system, so everything is treated as a node or vertex. You store only single pieces of related data (only strings, in fact), and a collection of properties that would traditionally make up a row or object is built through relationships. This feels extreme at first, as getting one row-like object requires multiple traversals, but over time it changed how I looked at data.


As an example (ignoring the major power of graph databases for starters), we might have the question "What is user 123's height?". In Neo4j we can find the person with id 123, pulling back a node with that person's name and height, and then extract the height value. In Cayley you would find the person's id node and then move via the height relationship to the value 184. So in the first case we are plucking property data from a returned node; in the second we collect the information we want to return as we traverse. This is more a conceptual difference than a pro or a con, but it becomes very clear when you start to import data via quad files.

What is an n-quad?

As mentioned, Cayley works on quads/triples, which are a simple line of content describing a start, a relationship, and a finish. This can be imagined as two nodes joined by an edge. What those nodes and relationships are can be many things: some people have schemas or conventions for how things are named; some use URLs to link web-based data. There is also a formal N-Quads standard that describes the format in detail.

A simple example might be from the above:

"/user/123" "named" "john" .
"/user/124" "named" "kelly" .
"/user/124" "follows" "/user/123" .

When is a database many databases?

One of the tricky parts of a graph database is how to store things. Many of the graph DBs out there don't actually store the data themselves, but rather sit on existing database infrastructure and work with the information in memory. Cayley is no different: you can layer it on a few different backends: LevelDB, Bolt, MongoDB, or an in-memory store.

An interesting part of this is the vague promise of scaling. Most graph databases start off the conversation with node traversal, performance, syntax but they almost all end in scaling. I think Cayley is now entering this territory. As it moves from a proof of concept to something that gets used more heavily, it’s acquiring backends that can scale and the concept of layering more than one Cayley instance in front of that storage layer.

One thing to keep in mind is that performance is a combination of how the information is stored and accessed, so put a fast graph DB in front of a slow database and you'll average out a little in speed. For my testing I used the LevelDB store, as it is built in and easy to get started with.

Show me the graph!

One of the first issues I had with Cayley was not knowing exactly how to get a graph onto the page. Neo4j's spin-up was a little clearer and its error handling is quite visual; with Cayley you have to get syntax and capitalisation just right for things to play nicely.

Let's assume you have the following graph:


Node A is connected out to B, C and D. This can be described in an n-quads file as:

"a" "follows" "b" .
"a" "follows" "c" .
"a" "follows" "d" .

If we bring up the web view using a file with that content we can query:

g.V('a').Tag('source').Out('follows').Tag('target').All()
Running it as a query should give you some json:

{
  "result": [
    {
      "id": "b",
      "source": "a",
      "target": "b"
    },
    {
      "id": "c",
      "source": "a",
      "target": "c"
    },
    {
      "id": "d",
      "source": "a",
      "target": "d"
    }
  ]
}
Swap to the graph view, run it again and you should see a graph. Not all that pretty but it’s a start.


So what's happening here? Starting at 'a' and calling it "source", we traverse edges named "follows" that go out from a, taking note of the end node and calling it "target". Be aware that "source" / "target" are case sensitive, and if you get them wrong you won't see anything. By "calling" I mean that as the nodes are traversed, Cayley will "emit" the value found with the name provided as the key, building up the JSON objects with each traversal as a new object in the returned list.

Doing more

So now we have the basics, and that's as far as a lot of the examples go. Let's take things a little further.

I recently read the article 56 Experts reveal 3 beloved front-end development tools, and in doing so I came across entry after entry of tools and experts. My first reflex was to ask where the intersections were and which entries were the outliers. So I decided to use this as a data source. I pulled each entry into a spreadsheet and then ran a little script over it to produce the quads file with:

"<person>" "website" "<url>" .
"<person>" "uses" "<tool name>" .

and for each first mention of a tool:

"<tool>" "website" "<url>" .

The result was a 272-line quads file with people, the software they use, and the URLs for the software.
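The original script isn't shown, so here is a minimal Ruby sketch of the kind of conversion described; the method name, row layout, and file names are my assumptions, not the author's code:

```ruby
# Convert spreadsheet rows into n-quad lines; emit each tool's website only once.
def rows_to_quads(rows)
  seen_tools = {}
  rows.flat_map do |person, person_url, tool, tool_url|
    quads = [%("#{person}" "website" "#{person_url}" .),
             %("#{person}" "uses" "#{tool}" .)]
    # only emit a tool's website quad on its first mention
    quads << %("#{tool}" "website" "#{tool_url}" .) unless seen_tools[tool]
    seen_tools[tool] = true
    quads
  end
end

rows = [["wes bos", "http://wesbos.com", "sublime text", "http://www.sublimetext.com"]]
File.write("userreviews.nq", rows_to_quads(rows).join("\n") + "\n")
```

Each row produces its person quads, plus a website quad the first time a tool appears.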

From there I started Cayley with the usual command:

cayley http --dbpath=userreviews.nq

So what next? We can find a product and see who is using it:

g.Emit(g.V('sublime text').In('uses').ToArray())

Which results in:

{
  "result": [
    "stevan Živadinovic",
    "bradley neuberg",
    "sindre sorus",
    "matthew lein",
    "jeff geerling",
    "nathan smith",
    "adham dannaway",
    "cody lindley",
    "josh emerson",
    "remy sharp",
    "daniel howells",
    "wes bos",
    "christian heilmann",
    "rey bango",
    "joe casabona",
    "jenna gengler",
    "ryan olson",
    "rachel nabors",
    "rembrand le compte"
  ]
}

Note I used the specific emit of the array values to avoid a lengthy hash output.

Sure that’s interesting but how about we build a recommendation engine?

Say you are a user who is a fan of SASS and Sublime Text. What other tools do the experts who use these like?

// paths that lead to users of the tools
var a = g.V('sass').In('uses')
var b = g.V('sublime text').In('uses')

// Who uses both tools?
var c = a.Intersect(b).ToArray()

// What tools are used by all of those people?
var software = g.V.apply(this, c).Out('uses').ToArray()

// Convert the array to a hash with counts
var results = {}
_.each(software, function(s){
  if(results[s] == null){ results[s] = 0; }
  results[s] += 1;
})

// Remove the search terms
delete results['sass']
delete results['sublime text']

// Emit the results
g.Emit({tools: results, users: c})

Here we are:

  1. finding the people that use sass and sublime text
  2. finding all the tools they use
  3. counting the number of times a tool appears
  4. removing our search tools
  5. emitting the results as the response

This gives us:

{
  "result": [
    {
      "tools": {
        "angularjs": 1,
        "chrome dev tools": 5,
        "jekyll": 1,
        "jquery": 1
      },
      "users": [
        "bradley neuberg",
        "nathan smith",
        "adham dannaway",
        "wes bos",
        "joe casabona",
        "jenna gengler",
        "ryan olson",
        "rachel nabors"
      ]
    }
  ]
}
Note how Cayley is pretty happy for us to move in and out of JavaScript, and that underscore.js is available by default. Handy. I also returned a custom result construction with both the results hash and the users it was derived from.

So this isn’t necessarily the most efficient way of doing things but it’s pretty easy to follow.

I think for many, the fact that Cayley uses a JavaScript based environment will make it quite accessible compared to the other platforms. I hope to keep exploring Cayley in future articles.

Reflecting on RubyMotion Experiences – Part 2

Posted in Code, Inside TFG, iOS, RubyMotion, Tips and Tricks

As part two of our series, Tony Issakov offers a few thoughts on developing with RubyMotion.


Whilst I spend a lot of time in a management role, I'm a developer at heart who cannot stop developing. Here are a few things I've come across in the RubyMotion space that may be of use.

1: Know the code you are building on

RubyMotion is a relatively young space that is filling quickly with enthusiastic Ruby developers. New gems are coming out regularly to carry over what we know from our native Ruby world and also to make new iOS capabilities more comfortable to access.

One thing to be aware of is that, being a new space, there are some fairly fresh pieces of code being integrated into common use, and some of them haven't had much time to mature. For this reason I suggest taking a moment to get to know the gems you are about to use.

Just as with any code, Github gives us a good place to start, checking out the most recent commit activity, the scale of the issues and hopefully checking that there’s a test suite. Whilst testing isn’t as fully fledged for RubyMotion, an attempt to test is a great start.

Reviewing how code is written has also been very informative. If you want some diverse exposure, start looking through BubbleWrap, the ever-growing mixed bag of RubyMotion functionality. You can see anything from how to leverage the camera through to observers with the Notification Centre. It gave me some ideas, as a Ruby developer, of what iOS topics I needed to start researching.

2: Memory Matters

One major change moving into the RubyMotion space from a Rails one is that it’s no longer a stateless environment, pages aren’t a regularly discarded entity and what you do over time can mean something. If you don’t know about reference counting in iOS and the commonly mentioned ARC, it’s worth doing a little homework to understand what RubyMotion is doing for you. Apple provides some documentation explaining memory management, here.

One example of why it's good to know this: I hit a show-stopping moment when I started attaching view content from a 3rd-party framework to my own controller objects using instance variables. The external library counted on those objects being released as the app moved through multiple sessions, and I was inadvertently retaining them. This ended up in some interesting crashes, and the word 'release' is a real giveaway.

A protip here (offered initially to me by Jordan Maguire) was to leverage the dealloc method. If you override a class's dealloc method, clear up your instance variables, put in a bit of logging whilst you are there, and then call out to super, then in theory your RubyMotion console should give you a bit of feedback that your app is being healthy about releasing its memory.
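Sketched in plain Ruby, that protip might look like the following; the controller, its ivars, and the UIViewController stub are all made up for illustration (in RubyMotion the real superclass and runtime do the calling):

```ruby
# Stub standing in for the real superclass so this sketch runs in plain Ruby.
class UIViewController
  def dealloc; end
end

class PhotoController < UIViewController
  def viewDidLoad
    @thumbnails = ["a.png", "b.png"]  # hypothetical retained view content
  end

  def dealloc
    @thumbnails = nil                  # clear up your instance variables
    puts "dealloc #{self.class.name}"  # a bit of logging whilst you're there
    super                              # then call out to super
  end
end
```

Watching those log lines appear in the console as screens are dismissed is the feedback that memory is actually being released.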

Another key object to figure out for this topic is WeakRef.

The need for WeakRefs comes up when you start passing delegates around and begin to form cyclic references, which, if not handled well, can at the least cause memory leaks. Wrapping an object in a WeakRef gives you a programmatic way of ensuring you release it; again, look to the console for that dealloc feedback.
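Ruby's stdlib WeakRef behaves similarly and makes for a runnable sketch; the delegate scenario and class name here are hypothetical:

```ruby
require 'weakref'

class ChildController
  attr_accessor :delegate  # a strong reference here would complete a cycle
end

parent = Object.new
child  = ChildController.new
child.delegate = WeakRef.new(parent)  # weak back-reference breaks the cycle

puts child.delegate.weakref_alive?  # true while parent is strongly referenced
```

Once the parent is released, weakref_alive? turns false instead of the child silently keeping the parent retained.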

3: Think ‘Performance’

One of the major benefits of RubyMotion is that it brings a lot of Ruby ideas for making code easy to write. One catch is that many layers of abstraction can create the opportunity for a performance hit.

We saw this first hand when trying gems like Teacup (a gem for layout and styling). When the gem was pretty young, people using it noticed their apps start to grind, and scrolling through tables suffered a stutter. This came down to styling table cells in a programmatic but performance-expensive way. From what I've seen, many of these issues have been resolved, thanks to both gem improvements and better patterns for developers applying code in a performance-friendly way.

One paragraph that really stuck in my head on this topic was reading through the Queries section of the CDQ gem README. CDQ is a slick Core Data helper and the paragraph reads:

Core Data is designed to work efficiently when you hang on to references to specific objects and use them as you would any in-memory object, letting Core Data handle your memory usage for you. If you’re coming from a server-side rails background, this can be pretty hard to get used to, but this is a very different environment.

This sums up my very first moments of walking into RubyMotion from Rails: iOS persistence is handled by Core Data, therefore Core Data equals ActiveRecord. We keep pushing the point, but it's not that Core Data isn't ActiveRecord; it's that things like persistence, and what it means in each environment, are very different.

4: IDE is not a bad word

Vim versus Emacs? How much finger twister can you play to do fairly amazing things with your editor? I’ve been sucked into this a few times over the years and will admit I find myself in the vim space largely because it was the editor I was raised on. In recent times I followed the Textmate to Sublime migration too. For a time though I found myself in the Java community working with IBM’s Application Developer and that’s where I came to terms with what an IDE is.

When I started to explore RubyMotion and got sucked into the “What editors can I use next?” game, I dabbled with RubyMine and was a little surprised. IDEs for me in the past have meant memory bloat and user interface lag but the JetBrains guys have done a great job optimising resource usage and letting you customise behavior.


Why bring this up for RubyMotion? For many who are looking for some form of visual assistance, a nice refactoring capability, an interactive debugger, or a visual spec runner, this might be a good tool to consider, giving you a safety net as you develop. It's absolutely not for everyone, but I generally take all the help I can get and regularly swap between command-line and visual tools depending on the task at hand.

5: The simulator is not the device

The iOS simulator is rather amazing in what it offers: a highly performant version of the device that lets you swap between device types, screens and resolutions, and even simulate events. With all this it does lure you into believing it'll be an effortless trip to the device, but we found there are a few catches.

The first was that sometimes the simulator outperformed the phone, due to the simulator having the full resources of the host available to it. A few of our animations that were smooth on the simulator stuttered slightly on the device, and it was only during a series of changes that this surfaced.

Another situation arose when using external Objective-C libraries: it's possible for a library to have different branches of code depending on the environment, meaning the code you run in the simulator is not necessarily the code you will run on the device. In one extreme case we actually needed to set some custom build flags for the app to even compile for the device.

So the recommendation here is to run on the device, and frequently enough that if the app has some unusual explosion you aren't left wondering which of the many gems you just added, or commits you just made, caused the issue.

RubyMotion: Under the hood

Posted in Code, iOS, RubyMotion, Tips and Tricks

Some time ago I was working on a RubyMotion app and was called over to look at a colleague’s screen only to find an amazing visual.

Just as Firefox jumped onto the scene with a 3D view of a web page, the team at RevealApp presented to me an exploded view of one of our RubyMotion iOS apps. 3D rotation of many wire-frame borders and the ability to click through the views to review settings was amazing. Since then a few new players have come along, so here's a quick recap of how you might see what's going on under the hood.

1: SugarCube

As a very lightweight entrant to the field, SugarCube is a RubyMotion gem that provides a lot of syntactic sugar and utility methods. It includes a nice little command called 'tree'. This was one of the first mechanisms I used to gain insight into how my app was being put together, and it's still a bit of a reflex when digging around in the console.

This means of seeing the UI structure in code might be a little harder to interpret at first but it’s nice that without any other frameworks or apps you can see what’s going on.


So to sum it up, it’s a console tool with a super easy install and requiring no external software to review the results. This is a great place to start debugging your views.

2: Motion Xray

Stepping up the visual feedback is Motion Xray. This is the only gem here I haven't personally used, but I've included it as its purpose is to get insight into the current view of the app, in the app itself.

This brings a great level of portability as there’s no need for bridging between external software and internal frameworks. It’s all just in the app. It does make me a little nervous that to view my view code I’m changing my view code but I can see the niche that this plugin aims to fill.


3: Frank and Symbiote

This one really surprised me. Working with Frank is something we’ve been dabbling with for years and it’s definitely growing on me as I feel the need to gain more confidence in how my user interface is behaving. In the past I’ve used the calabash console to help me understand how to access view components for my tests but I recently stumbled on Symbiote which is part of Frank.

Frank opens up a communications gateway for sending tests to the device or simulator, and Symbiote piggybacks on this, getting a full view of what the interface looks like on demand. This in itself is impressive, but it then renders that out to a webpage with an interactive console.

This tool is tailored towards making writing tests easier but I loved that it was a means of seeing my app state with just a browser on the side. My experience with it so far has been limited but there is definite potential.

Check out this article (Inspect the State of Your Running iOS App’s UI With Symbiote – Pete Hodgson) for a really great overview of this.

Frank is very easy to install, and so in turn was Symbiote.


4: Reveal App

This is where the excitement began, at the fully fledged, highly visual editor end of the spectrum. A separate app is run to do all the viewing and editing, and a framework gets included in your app to open up a bridge for communication (much like Frank).

I found in the early beta stages when I was heavily using this, there were occasional connection issues. The framework broadcasts its presence via Bonjour so you should see your device or simulator appear in the list of possible connections. This type of connection process (when it worked) was nice and simple when moving between device and simulator as there were no config files or settings to worry about.

Once in the app with your screen wire framed and ready for editing, the ability to see and change things is phenomenal. Anyone who is used to tweaking the visuals of a web page at a browser console will feel right at home with this kind of tool. Nudging UI by pixels, changing colouring, messing with opacity. All of these are ready to go.

The only downside to this product has been its final price. The licensing is not cheap, but so far in my experience this is by far the most powerful tool of its kind.


5: Spark Inspector

After loving Reveal App I took a quick moment to see what else was out in this space and was stunned to find another contender. Spark Inspector at this time feels like a lighter weight version of Reveal. It’s not as fully loaded with features and modifiable fields but it does have a lot of the key parts like a very visual 2D and 3D representation of your app.

I found that the cocoapod installed without any issues, the connections worked first time and generally this was actually a little easier to get going than my early RevealApp experience. The main area of weakness at this time is that not everything is as easy to access and edit as I found in Reveal. It does feel like you can get a little out of sync with the remote UI and it has a few more general quirks as you modify values.

The major redeeming factor is its price. At the time of writing, Reveal cost a bit over four times the price of Spark Inspector, so if you find Reveal is out of your budgetary league, this may be an alternative.


6: iOS Hierarchy Viewer

As a last-minute entrant, I was really impressed to stumble over this Git repo, which looks to be doing things a lot like Symbiote, using a web page as the external viewing tool. Looking over the readme, it feels like the install would be harder than Spark or Reveal, but there's a CocoaPod and it turned out to be rather painless.

The UI is raw in appearance but comprehensive in details. It feels very much like an insight into the state of the UI rather than the editable side that Reveal gives you.

One surprise was the Core Data addition, which with an additional line of code gives you a quick view of the state of your data. Having recently been using CDQ, I tested this and it worked just as expected, showing me a table of my data. This is a very interesting addition, putting that little bit more at your fingertips, but the lack of editing on the views does make this app more about insight than nudging visuals into place.

In Summary

It's wonderful to see such a diverse set of tools becoming available to developers. Between a RubyMotion console and the many tools on offer, a developer can get a quick understanding of the visual architecture they are working within, and even nudge it in the right direction before making a final change. Given we at times rely on the default Apple controls and views, it's also good to understand exactly why things are placed where they are, or how many views really do make up a button.

As I was writing this article I found a Stack Overflow thread covering this topic and picking up pretty much all of the above-mentioned tools, so if you are looking to hear how others have found these tools, that may be a place to start.

Also, as one closing pro-tip: don't run too many of these together. Not surprisingly, my app got a little unstable when spinning up the simulator and multiple apps all tried to start up servers and broadcast messages. Also keep in mind that running the specs instance of a RubyMotion app might clash with your main app if you are swapping back and forth. If things start to misbehave, you might need to restart your simulator or close down apps that are in the background.

If you know of any other apps that haven’t been discussed, let us know.

before_action an anti-pattern?

Posted in Code, Featured, Ruby on Rails

Some background

@d = 25

Here at The Frontier Group we have recently started using Rails 4 for our new projects, and are even migrating a couple of older ones. It's taken a little while, but we feel it's been out in the wild long enough for most of the major bugs to be weeded out. Amongst the many new features is before_action, which is a new name for the trusty old before_filter. It's my opinion that renaming it was a bad move, as it encourages misuse. This is a controversial opinion to hold, even within TFG. But in any case, before jumping to the comments to tell me of my errors, let me make my case.

In the beginning, I believe the intentions of the humble before_filter were pure: to provide a method to prevent an action from ever running, effectively filtering the action before it runs, hence the name. This seems to be supported by the ActionPack README from 1.2 up until 3.2 (as of 4.0 that README becomes quite sparse). If you don't feel like looking at seven-year-old documentation, the examples there use before_filter to invoke methods such as :authenticate, :cache, and :audit. Suspiciously missing are examples using before_filter to load instance variables, such as before_filter :find_post. In fact, the examples of how ivars are used to link the controller and template look like this:

def show
  @customer = find_customer
end

def update
  @customer = find_customer
  # more stuff down here
end

I suspect that the abuse of before_filter started, or at least became popular, when gems like CanCan started to emerge. For those unfamiliar, CanCan provides a method called load_and_authorize_resource, which does pretty much exactly what it says: it loads a resource, then authorizes an action upon it. Should the current user be unauthorized to perform the action, it doesn't get executed, presumably much the same way as before_filter :authenticate from the ActionPack README would do. With one caveat: it also loads the resource into an ivar of the same name. This leads to our new controller looking like this:

load_and_authorize_resource :post

def show
  # @post has been set by CanCan
end

def update
  # @post has been set by CanCan
  # more stuff down here
end

Coupled with the ease provided by CanCan, and one of the most overused acronyms and default go-to responses that I've seen since working with Rails (that's DRY, btw), this idea exploded. Note that I actually have NO data to back this up; it's just speculation. Regardless of the actual cause, I now see code like the following:

before_filter :find_posts, except: [:show]
before_filter :find_post, only: [:show]
before_filter :find_commenters, only: [:show]

def index
end

def my_posts
  @posts = @posts.created_by(current_user)
end

def show
end

private

def find_post
  @post = Post.find(params[:id])
end

def find_posts
  @posts = Post.all
end

def find_commenters
end
This is an anti-pattern and absolutely terrible code, all for the sake of DRY. At least in Rails 1 through 3, the method name gave an indication that you were doing it wrong, due to the lack of any form of filtering. Now, with before_action, it seems to be encouraged.

The reasons

First off, the arguments that it's a good thing.

For: It’s DRY

This code is very DRY. There is no repetition to be found here; in fact there is nothing in the methods at all, so there is nothing to repeat. Of course, the benefits of not duplicating code are well documented and proven. If there is a bug in that code, there is only one place to fix it, saving time and effort, and you're not going to forget to fix it in *that* other place.

For: It’s the Rails way

Using before_filters in this way seems to have become the Rails way, with the rename to before_action adding legitimacy. There is something to be said for doing what other people expect: it means they can come into your work and know exactly what's going on. Indeed, deviating from convention can lead to some level of confusion, so if you intend to do something other than the convention, you should have a sound reason.

On that note, why do I dislike the current usage?

Against: It's not any more DRY

Just above I mentioned using DRY as a reason to use before_actions, and indeed DRY is a go-to reason for many things, before_actions notwithstanding. The only issue is that you are still repeating yourself with before_actions. Note those pesky only: keys; they violate DRY exactly as much as calling the method in your action would. You just swap what you're repeating: in one case you repeat the action name, in the other it's the name of the filter (using the term lightly) method.

I would propose not setting the variable inside that method, in which case you do end up repeating the variable name. But you don't have to in order to stop using before_action. Compare the two code samples below:

before_action :find_post, only: :show

def show
end

is equivalent to:

def show
  find_post
end

In fact, it's even shorter doing it in the action!

Against: Except is terrible

Admittedly, you can get away from repeating the action names by using except: over only:. Blacklists always leave a nasty taste in my mouth when it comes to coding. They presume you know all future uses, or can ensure that any future maintainer is aware the list exists and needs updating. You can't be sure that your before action isn't going to blow away the results of another before action (see my point on side effects below), or, less sinister, load records that aren't required.

Note that omitting both only: and except: is the same as adding except: []

Against: They abstract the flow of the action from the developer

It's quite obvious they occur before the action does; after all, it's in the name: before_action. What isn't obvious is how they interact with each other. Looking at the example above, note where the order of execution is defined and where it actually matters. Also, all actions need to have consistent input parameters; see how show has no control over which post to actually show.

Against: They elevate the live time of the variables

This ties pretty closely to my previous reason. Code Complete discusses the concept of live time (hopefully that link works for you all). The basic concept is that the further variables are defined from their usage, the more difficult the code becomes to maintain.

@a = 2
@b = 3
@c = @a + @b

The value of @c should be obvious to all. However what about @c + @d? How easy was it for you to say 30?

Against: They have to cause side effects

I dislike side-effect-causing functions. There, I said it. I'm a side-effectist. Every chance I get to eliminate one is a little personal victory. Eric Evans has a pretty nice explanation of their pitfalls in his book Domain-Driven Design. I'm not going to preach the benefits of side-effect-free functions, except to say that any method that changes state introduces a chance that the method will be used without knowledge of that side effect. On their own, there is nothing wrong with side-effect-causing methods; we can't do our job without them. However, they do compound complexity, as you need to understand the side effects of every method in the call chain, so consider whether you really need to modify state in a method before doing so. To make matters worse, these methods often have unassuming names, like find_post, which give no indication that state will be changed.
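The contrast can be sketched in a few lines of Ruby; the controller and method names here are hypothetical, not from the example above:

```ruby
class PostsController
  def find_post_with_side_effect
    @post = { id: 1 }  # mutates controller state; nothing in the name warns you
  end

  def find_post
    { id: 1 }          # side-effect free: returns a value, the caller assigns it
  end

  def show
    @post = find_post  # the state change is visible right here in the action
  end
end
```

With the value-returning version, the only place @post can come into existence is an assignment you can see in the action itself.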

Against: Actions must rely on the side effects of other methods

By using a before_action to configure state, you remove that responsibility from the action. However, the sole purpose of an action is to configure, and maybe work with, that state; in effect you are robbing the action of its only job. The first place you look for action code is the action itself. It is not acceptable for a developer to be expected to search the entire controller, and any it inherits from, to discover how or why an action is or isn't working.

In summary

I would love to see the use of before filters/actions returned to their (in my opinion, originally intended) use of preventing actions from executing, and their use solely to load data banished to the annals of history. Code such as the following, despite being slightly longer, reads far better and is easier to comprehend and maintain:

def show
  @post = find_post
  authorize!(:read, @post)
  @commenters = find_commenters_on_post(@post)
end

def update
  @post = find_post
  authorize!(:update, @post)

  # update it!
end
