Functional Programming

I’d like to take a moment to document some recent realizations about best practices for developing and testing applications. I’m finding that employing Redux and functional programming paradigms at work is really leveling up my software development skills.

These are exciting times!

Prefer data streams and functional programming where possible

Many in the online community contend that functional programming (FP) and object-oriented programming (OOP) should not be mentioned in the same breath. Let’s put that notion to rest. These paradigms are not mutually exclusive.

FP can live within an OOP context and act as the engine where the most process-intensive actions occur. Concretely, this may be where raw API data is translated and processed by the front-end for rendering. These methods would be working directly with the data stream, and as such, should be pure and without side effects.

In practice, these functional methods should abide by typical OOP best-practice principles (single responsibility, open/closed, Liskov substitution, dependency inversion), but also be “pure”: data goes in and comes out, anything passed in is left unaltered, and there are no side effects (no interaction with a DB, API, or other third-party system). In other words, these methods would be a breeze to unit test (and should most certainly be unit tested).
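For instance, a pure translation step might look something like this (a minimal sketch with made-up field names, not our actual service code):

```js
// A hypothetical raw product shape coming back from an API is translated
// into something the front-end can render. Nothing here touches the
// network, the DOM, or the input object itself.
const toDisplayProduct = (rawProduct) => ({
  id: rawProduct.id,
  name: rawProduct.title.trim(),
  price: `$${(rawProduct.priceCents / 100).toFixed(2)}`,
});

// Same input always yields the same output, so the unit test is trivial:
// expect(toDisplayProduct({ id: 1, title: ' Widget ', priceCents: 1999 }))
//   .toEqual({ id: 1, name: 'Widget', price: '$19.99' });
```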

All this said, FP can also stand on its own without OOP. If you can and would like to do that, by all means do that. I like to think that OOP provides context for FP methods to live within. It is in the “non-pure” OOP space that architecture is applied and (non-FP) real-world interactions may occur.

Making the leap to functional programming

We all know how powerful the combination of loops, conditionals, and recursion is in programming. It’s natural for a seasoned programmer to become upset or defensive when FP best practice says, “stop looping.” Before these functional programming concepts gained a mass audience, we acted in space only, so looping across elements in space was obvious.

Take an array, loop through its elements, accomplish something. Let’s call these arrays in space something. Let’s call them conventional arrays.

Looping in this way is not possible when working with newer tools like RxJS. Subscribing to an observable yields an array of data in time. Manually looping entails using some sort of loop index, which plants it firmly in space.

With looping comes power and flexibility. The good part is you can create a custom solution inside the loop to solve the problem at hand. The bad part is you can create a custom solution… inside the loop… to solve the problem at hand. Any time you come across a loop, you are compelled to read it to figure out what it is doing.

At some point, you may have come across higher-order functions. In JavaScript these are map, filter, and reduce. You may have started using them because they save you some keystrokes – you don’t have to manually loop across your data when you use them. These methods act on arrays using their own looping construct.
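Here’s a small side-by-side of the two styles (the data is made up for illustration):

```js
// The same job done "in space" with a manual loop, then with
// higher-order functions.
const prices = [5, 12, 8, 30];

// Loop version: you have to read the body to learn the intent.
let total = 0;
for (let i = 0; i < prices.length; i++) {
  if (prices[i] > 10) {
    total += prices[i];
  }
}

// Higher-order version: filter and reduce announce the intent up front.
const totalFP = prices
  .filter((price) => price > 10)
  .reduce((sum, price) => sum + price, 0);

console.log(total, totalFP); // 42 42
```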

A tool like RxJS has similar methods that abstract out space, making them function across time… across a stream. Let’s call these arrays in time something else. Let’s call them a data stream.
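With RxJS, the same idea applies to values arriving over time rather than sitting in an array. A minimal sketch, assuming RxJS v7-style imports and using from() purely to manufacture a stream:

```js
import { from, filter, map } from 'rxjs';

// In a real app the source might be HTTP responses or user events
// arriving over time; from() is only used here to fake a stream.
from([5, 12, 8, 30])
  .pipe(
    filter((price) => price > 10),
    map((price) => price * 2)
  )
  .subscribe((price) => console.log(price)); // logs 24, then 60
```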

If you find yourself using these higher-order functions in typical JS projects that do not employ RxJS, you are already on your way to practicing functional programming. Using map, filter, and reduce also has the added benefit of informing the reader what the goal of the code is at a glance without having to understand everything that is occurring in the loop.

Yes, but should I employ functional programming for my thing?

Short answer: if you can, yes. Long answer: it depends… like every other thing in programming.

Questions to consider:

  • Are you already working with a data stream?
  • Is your thing process-intensive?
  • Is there already architecture in place? Does this code work with a data stream, or does it work with conventional arrays?
  • Can you even create “pure” functions for your thing, or is the whole point of it to interact with other classes (following some pattern or architecture)?

If you answered yes to all of that, then yes, please go functional!

NEON Spike

This write-up was prompted by my product collation work. We were wondering if we could decrease the time it took to filter a list of 750+ products.

The product collation service was architected using a conventional OOP paradigm. Conventional arrays are used heavily, but with higher-order functions doing the process-intensive looping. It was relatively straightforward to swap the native JS filter for an RxJS filter operator to accomplish the product filtering.

I saw a Medium post that showed promise in improving processing speed by simply swapping filter methods in this way, so I decided to give it a try: convert the product array into an observable, filter it with RxJS, subscribe, then pass the result back as a conventional array.
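Roughly, the swap looked something like this (a sketch with hypothetical names, not the actual service code):

```js
import { from, filter, toArray } from 'rxjs';

// Lift the conventional array into an observable, filter it with the
// RxJS operator, and collect the results back into a conventional array
// for the rest of the OOP code to consume. Because from() emits array
// items synchronously, the subscription completes before we return.
function filterProductsRx(products, predicate) {
  let result = [];
  from(products)
    .pipe(filter(predicate), toArray())
    .subscribe((filtered) => (result = filtered));
  return result;
}

const sampleProducts = [
  { id: 1, isActive: true },
  { id: 2, isActive: false },
];
console.log(filterProductsRx(sampleProducts, (p) => p.isActive)); // [{ id: 1, isActive: true }]
```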

I performed benchmarks for both filter methods (native JS and RxJS) with respect to processing on page load, and again for the processing that occurs during dynamic product filtering. I did this for product listings in industries both large and small. Here are the results:

                                                JS        RxJS      Δ (JS - RxJS)
Avg filter processing speed during load (ms)    0.2945    0.5153    -0.2208
Avg dynamic filter processing speed (ms)        0.3257    0.5761    -0.2504

While there may be gold in them hills when working with millions of small objects, processing speeds seem to worsen when filtering hundreds of very large objects using RxJS. I would be interested to see what might happen if we rearchitected the existing OOP solution to work directly with the data stream instead.

Obviously, that’s a pretty significant refactor whose results may or may not be better, and whose solution may or may not be easier to read or more maintainable than what we currently have.