Oct 03 2016

Redis and Me

Memcached vs. Redis: why do we argue about this?


Now, before we get into a React vs. Angular style war: I use both, but this post is specifically about what I like in Redis, since it's what I'm using right now. In the realm of in-memory data stores there's a growing number of entrants: Redis, Memcached, MemSQL, MongoDB's in-memory store (coming soon), and so on.

In a few recent small projects I needed a fast store for temporary data that could also act as a queue. Enter Redis, an in-memory store that offers queues, static value storage, error resiliency, and clustering for availability. So how did I use it? Let's dive in:

The primary driver for this need was a security requirement. The client could be one of many things: a React web client, a React Native mobile app, direct REST API calls, or GraphQL calls. So the chosen method for authentication was JWTs. If you don't know what a JWT is, open that link in a new window, do a bit of reading, and come back; otherwise we're going to move along. The project had a requirement that those JWTs be signed with a key. Now, we could have just let the JWT Node library figure it out, but that wouldn't work in the distributed setup we were targeting, as each server might, and likely would, end up with a different signing key. The simple answer would be to assign a fixed key across the board, right? Well, enter security requirement number 2.
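
To make that concrete, here is roughly what signing and verifying looks like. I'm assuming the popular jsonwebtoken package here; the payload fields and the hard-coded key are stand-ins just to keep the sketch self-contained, not the project's actual code:

    // Issue and check a token with a shared signing key (sketch only).
    var jwt = require('jsonwebtoken');

    // In the real setup this key comes from Redis (see below); it is
    // hard-coded here only so the example runs on its own.
    var signingKey = 'replace-me-with-the-current-signing-key';

    // Issue a token for a user.
    var token = jwt.sign({ userId: 42, role: 'user' }, signingKey, {
      expiresIn: '1h'
    });

    // Any service holding the same key can verify the token.
    try {
      var payload = jwt.verify(token, signingKey);
      console.log('valid token for user', payload.userId);
    } catch (err) {
      // Thrown if the token is expired or was signed with a different key,
      // which is exactly why every server needs the same current key.
      console.error('invalid token:', err.message);
    }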

Security requirement number 2 was that these keys need to rotate every 24 hours by default, and that interval needs to be configurable to other values as needed. Also, in an emergency, a manual key reset needs to be available should an issue be discovered. So let's recap:

  1. We're using JWTs for authentication
  2. JWTs are signed using a given key
  3. In a distributed cluster, each auth service and consumer needs access to the current signing key
  4. We need a way to keep all of these services in sync

The Solution

Now, after checking it against the other requirements, I found Memcached didn't quite fit what I thought we needed. It was missing some of the features I wanted, mainly pub/sub for real-time data reads and the ability to persist the whole store to disk so it can survive hard restarts if needed. So here is the flow (a rough sketch in code follows the list):

  1. A small Node process sits in the background and is in charge of generating signing keys
  2. The process generates a random key and stores it in Redis
  3. The parts of the application that need the key connect to Redis and read the value on demand
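
In code, that flow looks roughly like the sketch below. I'm assuming the node redis client; the key name, the rotation interval variable, and the error handling are illustrative choices, not the production code:

    // Sketch of the background key generator and a reader (assumed names).
    var crypto = require('crypto');
    var redis = require('redis');

    var client = redis.createClient();

    // --- Background process: generate a random key and store it in Redis ---
    var ROTATION_MS = Number(process.env.KEY_ROTATION_MS) || 24 * 60 * 60 * 1000;

    function rotateKey() {
      var newKey = crypto.randomBytes(48).toString('hex');
      client.set('jwt:signing-key', newKey, function (err) {
        if (err) { return console.error('failed to store signing key', err); }
        console.log('signing key rotated');
      });
    }

    rotateKey();                          // seed a key on startup
    setInterval(rotateKey, ROTATION_MS);  // rotate on the configured schedule
    // An emergency manual reset is just calling rotateKey() again, e.g. from
    // an admin-only endpoint or a one-off script.

    // --- Any service that signs or verifies JWTs: read the current key ---
    function getSigningKey(callback) {
      client.get('jwt:signing-key', callback);
    }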

Now you might think that's a prime example of what Memcached is best at! BUT we needed pub/sub as well. Why? Well, the application was also asked to show who is logged in, their respective JWT, their last action against the server, their last connection IP, and of course a timestamp. This needs to be fed into a pub/sub system so that every time it changes, the update can be pushed to any client watching activity. That client could be a system auditor, a security audit tool, or any number of things. There was also a need to invalidate a JWT on demand, and that is also handled via Redis.
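
Here is a rough sketch of those two pieces, again assuming the node redis client; the channel name and the revocation set name are made up for the example:

    // Pub/sub activity feed plus on-demand JWT invalidation (sketch only).
    var redis = require('redis');

    var publisher = redis.createClient();
    var subscriber = redis.createClient(); // a subscribed connection is dedicated to pub/sub

    // When a user does something, publish the activity record.
    function recordActivity(activity) {
      // activity: { userId, jwt, lastAction, ip, timestamp }
      publisher.publish('user:activity', JSON.stringify(activity));
    }

    // An auditing client simply listens on the channel.
    subscriber.subscribe('user:activity');
    subscriber.on('message', function (channel, message) {
      console.log('activity update:', JSON.parse(message));
    });

    // On-demand invalidation: drop the token into a revocation set and have
    // the auth layer reject anything it finds there.
    function revokeToken(token, done) {
      publisher.sadd('jwt:revoked', token, done);
    }

    function isTokenRevoked(token, done) {
      publisher.sismember('jwt:revoked', token, function (err, result) {
        done(err, result === 1);
      });
    }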

My Experience

So far this has been possibly the simplest technology integration I've worked with. The hardware requirements were also very small: so far, in virtual load tests, we haven't gone over 100MB of RAM usage per Redis instance. Now, it's not a huge application, but it's not tiny either, so this really impressed me. Also, we use Docker as our deployment system and this just worked, with little to no messing around. Overall this has alleviated a number of issues we foresaw and gives us lots of room to grow going forward.

Our Alternative Options

Now, this wasn't our only option. We also looked at RabbitMQ + Memcached and RabbitMQ + MemSQL combinations, but the positives of those stacks didn't outweigh the negatives. In both cases we'd be looking at higher hardware requirements, and the Node packages just didn't match the level of ease and coverage that the Redis package offers.

If we were working with MongoDB as our primary database, there's a strong chance we'd have gone that route. MongoDB's enterprise in-memory store looks very promising, but considering we're primarily backed by MariaDB (MySQL), it's kind of overkill to implement a complete alternate database just for an in-memory data store.

In the end we are VERY happy with Redis so far. We're always watching what's coming out, so we'll see what comes down the pipe, but for now Redis seems like the obvious choice for our use case, and it handles it while staying efficient.


Enjoyed this post? Let me know on twitter or grab some RSS.