Logging from headless Nerves machines to Papertrail

In my spare time I’ve recently been learning some Elixir, and I was excited to come across the Nerves project, with which you can create a Raspberry Pi firmware image that essentially boots directly into the Erlang BEAM VM. It’s an impressive system that works really nicely for embedded systems development, which is one of Erlang’s traditional strengths. The project I’m working on is controlling the Unicorn Hat HD, a little 16×16 LED grid, from a Raspberry Pi via the Elixir/ALE hardware-interfacing library. You can see my project’s source on GitHub, for what it’s worth, though it’s still not functional.

One potential roadblock when trying to develop on a remote Raspberry Pi is figuring out how to view console output from the device, especially if it’s running headless. After poking around a bit I found a solution that seems to work well and is easy to set up: the Pi ships all of its logs to the Papertrail log-aggregation service, which has a free tier that should be adequate for anybody doing hobbyist development.

Below are the instructions for doing this. As a note, I’m running on the latest versions of Elixir, Phoenix and Nerves (1.5.3, 1.3.0, and 0.8.3 at the time of this writing), and I’m using the excellent nerves_init_gadget library to get mDNS set up on the Pi and to push firmware updates to it over ssh.

Also worth noting: my project is set up as a poncho project, as recommended in the Nerves documentation. I’ve got a directory ui which contains the Phoenix web server code, and then a parallel directory fw containing the configuration and setup for the Nerves system itself.

Sign up for Papertrail

This bit is straightforward: navigate to the Papertrail home page and sign up for a free account. After you sign up you’ll be at the “add systems” page, which gives you a hostname and port you will use to connect from Nerves. The page shows you an “install script” which you shouldn’t run – it’s for forwarding syslog events to Papertrail, which is not what we’re doing here.

Add the LoggerPapertrailBackend library to your dependencies

The LoggerPapertrailBackend library sends messages from Elixir’s standard Logger API to Papertrail. In my development cycle I typically run Phoenix on my local machine until things seem reasonably stable, and then build and push the firmware to the Pi when I want to test it out on its eventual destination.

During local development, all the console output is right there in the terminal so we don’t particularly need Papertrail. Therefore, we set up the Papertrail stuff in the fw project, so it will run on the Pi but not locally.

Per the instructions on GitHub:

Add the dependency to fw/mix.exs. I added it under the deps(target) section, which seems to work fine.

def deps(target) do
  [
    # [...]
    {:logger_papertrail_backend, "~> 1.0"}
  ] ++ system(target)
end

Configure the logger

You’ll want to set this up in your firmware project’s config/config.exs.

config :logger, :logger_papertrail_backend,
  url: System.get_env("PAPERTRAIL_URL"),
  level: :debug,
  format: "$metadata $message"

config :logger,
  backends: [ :console, LoggerPapertrailBackend.Logger ],
  level: :debug

Two things to note here:

  1. We’re sending :debug logs to Papertrail. This is helpful to get started, but a lot of the debug output from Nerves is noisier than it is useful – in particular, you’ll see a lot of chatter from the nerves_init_gadget mDNS server and the WiFi modules that you probably won’t care about.
  2. We’re storing the actual Papertrail URL in an environment variable and then plugging it into the configuration at compile time. This corresponds nicely with how Nerves controls what targets you build for via environment variables (MIX_TARGET, etc). Submitting logs to Papertrail over UDP doesn’t seem to require an API key, so keeping the hostname and port out of your public source code will hopefully prevent Joe Internet from writing stuff to your logs.

Set up your environment

This is basically as simple as:

 export PAPERTRAIL_URL='papertrail://logsX.papertrailapp.com:XYZZY/my-name'

Here we’re constructing a (fake) URL with the hostname and port that Papertrail gave us after we signed up. The my-name bit can be anything you want; Papertrail uses it to distinguish different applications connected to the same log sink. You’d probably want to vary this name to distinguish between production and dev builds, different servers of a multi-server system, etc.

In practice, I’ve got a file in my home directory ~/nerves-build, which I source in order to set up environment variables I need to build the firmware. Mine currently looks something like this:

export MIX_TARGET=rpi3
export NERVES_NETWORK_SSID="(the ssid to my wifi network)"
export NERVES_NETWORK_PSK="(the password for the same)"
export PAPERTRAIL_API_TOKEN='(my papertrail API token, see below)'
export PAPERTRAIL_URL='papertrail://logsX.papertrailapp.com:YYYYY/narwhal.rpi'

As for the Papertrail API token: it’s not actually needed on the Pi itself, but it’s useful for viewing logs in the console via Papertrail’s CLI tools (see below). You can find yours on the Papertrail site under “Settings / Profile”.

Build and deploy the firmware

Given the above environment variable setup, building the firmware is as easy as running mix firmware from the fw directory. You would typically then get it onto the Pi by inserting an SD card, running mix firmware.burn, and popping the SD card into your Pi. If you have nerves_init_gadget set up, you can instead deploy over ssh by using mix firmware.push my-rpi-mdns-name.local, which is less of a hassle.
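Putting it all together, a full build-and-deploy cycle from a fresh shell looks roughly like this (a sketch assuming the ~/nerves-build file above and the mDNS name configured via nerves_init_gadget):

source ~/nerves-build
cd fw
mix deps.get          # first build only
mix firmware
mix firmware.push my-rpi-mdns-name.local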

Fire it up and watch the logs

You’re ready to go: connect the power and wait for some logs to show up in the Papertrail event viewer. With the above configuration, you should at least see some mDNS :debug events in there if everything is working properly.

Papertrail also has a handy CLI interface you can use to see the logs it’s ingesting in realtime, if you prefer a console UI to a web UI. You’ll need to set up the API token as above, then you can just use papertrail -f to view your logs as they happen.

Tone it down

Once you’ve verified that the connection works, you’ll likely want to set the log level of your :logger_papertrail_backend config settings back to :info so you aren’t drowning in mDNS logs. (Actually, ideally you’d want :debug logs from your own application and :info logs from everything else. I’m sure there’s a way to do this, but I’m not sure how to do it quite yet.)
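Concretely, that’s just the same backend config from earlier with the level swapped to :info:

config :logger, :logger_papertrail_backend,
  url: System.get_env("PAPERTRAIL_URL"),
  level: :info,
  format: "$metadata $message"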

Adventures in client-side routing with re‑frame and OAuth

It’s been a while since my last post; one of the few disadvantages of working at a company you like a lot* is that as your work projects get more interesting, the urge to hack on external stuff in the interim diminishes. But having had some downtime and considerable anxiety to burn off in the last week or two, I revived one of my old open-source ideas: Haunting Refrain, a little single-page app that gets your Foursquare check-in history and then builds a random Spotify playlist out of data from the places you’ve been.

This is still very much a work in progress, but I’m pretty pleased with its direction and it uses a lot of intriguing tech. It’s all written in ClojureScript, and it uses re-frame for the basic single-page app control flow and UI bits. re-frame was written by Mike Thompson, probably Clojure’s greatest living essayist, and is an excellent package somewhat in the elm / redux / FRP-if-you-squint vein. I’m also now using datascript to retain more domain-focused data and posh to wire it up to the UI elements, about which more later.

In a previous post, well over a year ago now, I talked about some difficulties I was having with dealing with external OAuth-style authentication in single-page apps. Having obtained a great deal more experience with client-side routing since then, I managed to solve this fairly quickly in Haunting Refrain (if you don’t count the time I spent bashing my head against similar problems at work, I guess).

My current set-up uses pushy to handle the HTML5 history setup and sibiro as a routing library, largely because it’s the only client-side routing library whose README I can read without the risk of developing a migraine (honestly, client-side routing should not be very complex, what’s the deal?). I define a big route table which uses keywords for every route; each entry has a URL-matching pattern and a reference to a reagent component which will be used to render the given page.
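To give a flavor of the shape, here’s an illustrative sketch of the route table – the component names are hypothetical, and the real patterns are written in sibiro’s syntax rather than this simplified form:

;; Each route keyword maps to a URL pattern and the reagent component
;; that renders the corresponding page.
(def routes
  {:main/index       {:path "/"                 :component home-page}
   :foursquare/hello {:path "/oauth/foursquare" :component foursquare-callback}
   :spotify/hello    {:path "/oauth/spotify"    :component spotify-callback}})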

On the re-frame side I keep a value in the app-db called :route/current-page which keeps track of the keyword matching the current URL. Whenever the URL changes, whether by the user landing on the page for the first time or from a link being clicked or the app itself redirecting the user, pushy will dispatch a new re-frame event of the form [:route/changed route-keyword route-params-if-any]. The handler for that event just persists those two values in the app-db, and then in the view code there’s a subscription which pulls out the two values, checks the big route table for the component matching the current route keyword, and renders the component, passing it the page parameters in case it needs them.
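In code, that plumbing is roughly the following (a sketch using re-frame’s 0.8-era registration API; the subscription and event names are mine):

;; assumes (:require [re-frame.core :as re-frame])
(re-frame/reg-event-db
  :route/changed
  (fn [db [_ page params]]
    (assoc db :route/current-page page :route/params params)))

(re-frame/reg-sub
  :route/page-and-params
  (fn [db _]
    [(:route/current-page db) (:route/params db)]))

(defn current-page []
  (let [[page params] @(re-frame/subscribe [:route/page-and-params])]
    [(get-in routes [page :component]) params]))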

So that bit is pretty straightforward, and I’m pleased with how declarative the routing table winds up being. For the Spotify and Foursquare OAuth callback pages, I’m cheating a little bit. When the user needs to authenticate, he or she will be redirected to Foursquare, will hit the “allow access” button, and will then be redirected back to a specific callback URL on Haunting Refrain. Back on the site, pushy parses the callback URL as :foursquare/hello, and the component which is associated with that route renders a blank page and dispatches an event when it is mounted into the DOM. The handler for this event parses the OAuth access token out of the URL, saves it in the app-db, persists it into HTML5 LocalStorage, and then redirects the user back to the home page at :main/index.
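The callback component itself can be tiny; something like this sketch (the event name is invented):

;; Renders a blank page and fires an event once mounted into the DOM.
;; assumes (:require [reagent.core :as reagent] [re-frame.core :as re-frame])
(defn foursquare-callback [_params]
  (reagent/create-class
    {:component-did-mount
     (fn [_]
       (re-frame/dispatch [:foursquare/callback-hit (.. js/window -location -href)]))
     :reagent-render
     (fn [& _] [:div])}))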

The LocalStorage bit is an interesting side track. Since the user winds up seeing a full page refresh whenever he or she is redirected for authorization, the application essentially has three entry points where it needs to construct the user’s state from scratch: on the home page, and then on the authorization callback pages for both Spotify and Foursquare. Since, in theory, we need to store access tokens for both services, the volatile re-frame app-db is not going to cut it. In an earlier version of the application, you could authenticate to Foursquare, get an access token, then authenticate to Spotify, at which point a full page refresh wiped out the Foursquare token.

Persisting stuff to LocalStorage wound up being pretty easy, and relies on the new re-frame effect and coeffect system. I have a re-frame effect :persist! which, when returned from an event handler, will write a value to the browser via the hodgepodge library. A matching :local-storage coeffect pulls it out of the browser, and I’m using this during app initialization to seed the database with any previously-retrieved access tokens. This works great, and should make it quite easy to persist arbitrary data across application invocations.
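Roughly, the pair looks like this (a sketch relying on hodgepodge’s map-like local-storage wrapper; the key names are illustrative):

;; assumes (:require [re-frame.core :as re-frame]
;;                   [hodgepodge.core :as hp])
(re-frame/reg-fx
  :persist!
  (fn [[k v]]
    (assoc! hp/local-storage k v)))

(re-frame/reg-cofx
  :local-storage
  (fn [cofx k]
    (assoc cofx :local-storage (get hp/local-storage k))))

;; used from an event handler via
;; (re-frame/inject-cofx :local-storage :foursquare-token)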

Overall I’m quite happy with re-frame. I’ve found that it can be a little difficult to track the control flow once a system gets reasonably complex, since you’re essentially encoding most of it by means of constructing a large bespoke FSM, but it does a fantastic job of keeping the control flow and display logic separate, and it is fairly easy to tinker with.

I’ll have more to say about Haunting Refrain in the days to come—the routing stuff isn’t actually one of the more interesting parts of it, the datascript and posh bits are. The thing is still buggy as heck and not deployed anywhere, but it’s runnable locally. Check it out!

 

* Workframe – we’re hiring!

I wrote a thing: cljs-datepicker

I spent a little time today messing with date pickers for my eventual single-page app, and wound up with something general enough that it seemed worthwhile packaging it as a library.

To wit, I published cljs-pikaday, a ClojureScript interface to the Pikaday JavaScript date-picker. So far it just has a reagent interface, but I still plan to add an Om interface and a re-frame interface, at least. You can see an online demo on GitHub.

After having worked with a lot of websites spawned from lein new templates in the recent past, I found the eventual project.clj file for the library to be shockingly small. It winds up being very easy to publish a library on clojars (as a caveat, I still need to test the published artifact in a separate project to make sure I didn’t mess everything up – I know my published jar has an extraneous directory in it, for one thing). I found the easiest way to develop the library was to start with a lein new reagent template, get the basic functionality working, and then move the generated scaffolding out, keeping just the actual library files.

The interface itself seems to be adequate for my needs, if not perfect. You can basically pass arbitrary atoms into the pikaday/date-selector component and have the date-picker synchronized with the atom, so that user selection updates the atom and atom updates change the selected date of the picker.
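Usage looks roughly like this (a sketch; the namespace and option keys may differ slightly from the README):

;; assumes (:require [reagent.core :as reagent]
;;                   [cljs-pikaday.reagent :as pikaday])
(defonce selected-date (reagent/atom (js/Date.)))

(defn date-demo []
  [:div
   [pikaday/date-selector {:date-atom selected-date}]
   [:p "Selected: " (str @selected-date)]])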

I’m still puzzling over what a humane re-frame interface would look like. One could easily dispatch events using the existing callback interface, but I’m not really sure of the best way to express the subscriptions the picker should listen to.

Authentication, state and single-page apps in ClojureScript

I’ve been spending a lot of time recently working on a single-page app in ClojureScript, most recently using the newly released, and impressive, re-frame project, which builds an FRPish unidirectional data flow on top of reagent, the ClojureScript interface to React. By a single-page app, I mean one with almost no server-side code, which could theoretically be served from a static HTML page, and which only lives at a single URL (modifying the #fragment part of the URL to navigate between logical “pages” in the app, as is the vogue these days).

I’ve tried out a few different approaches to writing this app, including with Om and plain reagent, and re-frame seems to have just the right combination of simplicity and abstraction for my taste. With reagent I sometimes get the feeling I’ve had in backbone (JavaScript) projects, that there’s not really very much structure in the framework and I have to make a lot of stuff up on my own, which is fine but raises the question of why I’m using a framework at all. Om seems pretty neat, and definitely has a structure to it, but I’m not crazy about the way it relies on async channels to communicate between components. (In fairness, I’ve probably spent the least amount of time with Om, partly because the tutorials for it seem a little esoteric.)

At any rate, the app as it currently exists is pretty simple. There is a home page, from which the user is prompted to log into an OAuth service (in my case, foursquare). When the user hits the link, he or she will be redirected to foursquare to authorize my app, after which he or she will arrive back at my site with an authorization token in the URL (this is the “callback URL”, cf foursquare). I need the token in order to pass it along to foursquare for API requests I subsequently make to get the user’s check-in history and so forth.

With a full-stack application this would be pretty easy – my server-side code could look for the callback URL, and when it’s found it could grab the token, redirect the user to a known “thanks for logging in” page, and pass the token along to the user’s browser in a cookie, or embedded on the page, or through any number of other well-known server/browser mechanisms.

With a purely client-side app, however, the situation is trickier. In general, the user is expected to stay on a single web page the entire time he or she is using the app, without refreshing; application state sort of accumulates over time in the DOM and JavaScript object models on the page. But since the user needs to redirect to foursquare’s site in order to authenticate, my application now effectively has two entry-points: one when the user first navigates there, and one when the user returns from authentication. Indeed, since in the long term I want my app to authenticate against at least one more OAuth service, and probably several (spotify, twitter, etc), it will have an arbitrary number of entry points.

This complicates managing user state, since the user’s previous state (as reflected in the page’s object model) will be completely destroyed when he or she leaves the site for authentication. Upon the user’s return, the sum total of his or her state will essentially consist of the callback URL, including the authentication token. This is somewhat manageable with a single OAuth provider; when the user hits the callback URL, our client-side code can stash the token in the page’s object model somewhere and then use something like history.replaceState() to modify the URL back to the landing page. With more than one OAuth provider this approach is problematic, since the state will disappear when the user hits the second provider.

So we need a way to persist information between page refreshes. The two most obvious methods are cookies and HTML5 localStorage, and I will be using localStorage (or sessionStorage) because it is new and shiny. With that said, there are still two viable approaches I can see to this.

  1. User chooses to authenticate to foursquare, is redirected to foursquare, and then arrives back at the site with an empty app state and an auth token in the URL. The client-side code stashes the auth state in localStorage and uses replaceState() to navigate back to the home page.
  2. When the user hits the “log in” button, we open a popup. In the popup, the user is redirected to foursquare. When the user comes back to the popup page after authentication, the popup sets the token value in localStorage and closes itself. When localStorage is set, this will trigger a "storage" event in the parent window, which can then react by updating its state (to say “thanks for logging in” or the like); see the sketch just after this list.
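The parent-window side of approach 2 would be a small listener along these lines (a sketch; the storage key and event names are invented):

;; React when the popup writes the token into localStorage.
;; assumes (:require [re-frame.core :as re-frame])
(.addEventListener js/window "storage"
  (fn [e]
    (when (= (.-key e) "foursquare-token")
      (re-frame/dispatch [:foursquare/logged-in (.-newValue e)]))))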

There are a lot of things I like about the second approach. Because the original page persists while the popup is open, it can just trundle along as it had before, waiting for the storage event to fire; this simplifies state management in the parent window. However, it has two considerable drawbacks:

Firstly, desktop browsers tend to block popups invoked from javascript these days, and honestly, thank god they do. There is probably a way to work around this, maybe by using an <a target="_blank"> tag or the like, but most of my experiments so far have triggered the popup blocker in my browser.

Secondly, popups kind of suck in mobile browsers. They work, more or less, but they don’t feel native to the mobile experience.

In passing, I’ll note that using an iframe might also be a technical solution to the two above problems, but I don’t really want to because (a) the user should see the address bar in an OAuth situation to validate that they’re not on a phishing site, and (b) ick, iframes.

So it seems like a straightforward redirect is the way to go. In my next installment, I’ll dig a little deeper into what this means for state management in the app, how that works with re-frame, and into client-side routing in ClojureScript generally.

Yet another language

In the interest of keeping this blog somewhat regularly filled with content, I should note that I’ve been working on an interesting new project in my spare time at work. It’s a little game and I’m writing the back-end in Clojure, using http-kit for its websockets capability. I’m trying to figure out the best way to shoehorn core.async into it – it seems like a natural fit for websockets.

I also gave a little introduction to Clojure and ClojureScript to a technical user group at my job; the slides are up on GitHub (well, really the source to the slides in AsciiDoc, but there are build instructions on the site, or you can just view the deck as markdown on GitHub).

It’s interesting to me how different the programming approaches of Clojure and Scala are, despite them sharing a great deal of aesthetic ideals (in particular, immutability and good interoperability with the JVM platform).

Three quick Scala plugs

I’m still thinking about the dependency-injection and Akka stuff and will have at least one more longish post on the subject, but I’ve recently become distracted by refactoring some of my old JavaScript code into ClojureScript and figuring out Clojure’s new core.async library (ultimately I hope to form all of these distractions into a ring instead of a straight line, at which time I will be the acme of productivity, but that’s another story). But before I get too deep into Clojure land I wanted to plug a few Scala-related things I’ve come across recently.

Firstly, Derek Wyatt’s book Akka Concurrency: Building reliable software in a multi-core world is excellent, covering a great many Akka topics in an enjoyable style. In particular, his chapter on testing seems profoundly relevant to the mock injection topics I’ve been thinking about, but I haven’t quite absorbed it yet. The whole thing is refreshingly up to date, too (which makes sense since it was only published a month or two ago), and it doesn’t pad out its length with a lot of “learn Scala in 30 days” remedial material. Anyways, I recommend it thoroughly.

Secondly, John Sullivan’s long post about the cake pattern is by far the best treatment I’ve seen of it online, and is required reading for anyone interested in dependency injection in Scala. (It’s been up for several months now but somehow I missed it until recently.) John has written a dependency-injection framework, congeal, which makes instant intuitive sense for me as someone coming from a Java / Spring background; unfortunately it depends on some macro stuff which won’t make it into Scala’s mainline, so it isn’t ready for prime time and will need to be rewritten down the road once Scala’s macros reach their next stable state. There’s a video from ScalaDays 2013 describing the framework.

And finally, following up on the subject of the cake pattern, Daniel Spiewak’s keynote from NEScala, “The Bakery from the Black Lagoon,” is an excellent talk which made me think about the cake pattern in a new way (as more of a compiler-enforced module system than as a form of dependency injection). His implementation of the cake pattern is also interestingly different from most examples I’ve seen online – in particular, he mostly eschews self-types, with the exception of needing one for a virtual class.

More on Akka and dependency injection

Just as a quick follow-up to my previous post, I thought I’d note that the official Akka blog has published a post regarding Akka and dependency injection (kind of weirdly expressed as a mini white paper, as though the internet at large had issued an RFP for an actor-based concurrency system with dependency injection).

While, as I mentioned before, I should emphasize that I’m by no means an expert in Akka or actor best practices, I’m not convinced that this document addresses my particular concerns with the intersection of Akka and dependency injection. I’m still thinking about a larger post breaking down the approaches to this topic that I’ve seen online, but this one sort of falls into the “mock stuff outside of Akka” camp.

The document has two main points: firstly, if you have an existing dependency-injected service, you can pass along a factory which knows where to find it to the Props constructor of an actor, and there’s a way to attach a DI application context to an ActorSystem to support this, which seems pretty convenient. Secondly, if you need to expose an interface from actors to an existing system based on a DI framework, you can include an ActorSystem singleton in your DI object graph, and then expose a sort of regular-object facade over it which finds specific actors and returns either ActorRefs or futures resulting from sending ask messages to them.
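That second point might look something like this hypothetical facade (the names and the actor path are mine, not the blog post’s):

import akka.actor.ActorSystem
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.Future
import scala.concurrent.duration._

// A plain object wrapping an actor, suitable for handing to DI-managed code.
class NotificationFacade(system: ActorSystem) {
  private implicit val timeout: Timeout = Timeout(5.seconds)
  private val service = system.actorSelection("/user/notification-service")

  def deliver(message: String): Future[Any] = service ? message
}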

That’s all well and good, but it seems more like a way to integrate between Akka and an existing synchronous DI-based system than anything that makes dependency injection useful or usable inside a purely Akka-based system. (In particular, the document’s unfortunate final section seems to be aimed squarely at recalcitrant middle managers who need to be convinced that a move to Akka will not result in a whole bunch of now-legacy code needing to be tossed out.) While I’m not incredibly interested in this topic myself, I thought Akka already had a talking point for this integration problem in the form of “typed actors”.

The bit that I still haven’t seen addressed is that if Akka likes actors to explicitly manage the lifecycles of other actors they supervise, there doesn’t seem to be any room for the inversion of control that is the hallmark of dependency injection frameworks in the first place. To put it in more concrete terms, if I’m running a partial integration test of my simple notification service and I want it to have a real database actor and mock REST web service actors, how can I tell the actor that supervises all the HTTP worker actors to create mock actors instead of live ones?

I have a half-formed idea of how this could work that involves having a sort of service locator / factory actor which is responsible for actor instantiation, but the idea in my head doesn’t particularly jibe with Akka’s supervision hierarchy, which as far as I can tell is coupled very tightly to actor instantiation.
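For what it’s worth, one way to sketch that half-formed idea is to have the supervisor take a worker factory instead of calling context.actorOf directly, so a test can substitute probes. This is purely illustrative (HttpWorker is a stand-in), and it sidesteps rather than answers the supervision question:

import akka.actor.{Actor, ActorRef, ActorRefFactory, Props}

// The supervisor no longer hard-codes its worker's Props; a test can
// pass a factory that returns a TestProbe's ref instead.
class HttpSupervisor(makeWorker: ActorRefFactory => ActorRef) extends Actor {
  private val worker: ActorRef = makeWorker(context)
  def receive = {
    case msg => worker forward msg
  }
}

// Production: system.actorOf(Props(new HttpSupervisor(_.actorOf(Props[HttpWorker]))))
// Test:       system.actorOf(Props(new HttpSupervisor(_ => probe.ref)))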

Testing Akka: actors, dependency injection and mocks

I’ve been digging into what the expected way is to test my small Akka system, as described in my previous post on the subject. I think my problem partially arises from being unclear as to the proper mode of dependency injection in Akka. In other words, I don’t know what the proper way is for my Root actor to obtain a reference to its Database and HTTP sub-actors. Does it create them itself? Look them up from a service locator? And what if I need to inject mock actors into the system in some parts in order to test it?

Various bits of Akka documentation suggest different approaches to wiring actors together; for instance, this page in the official docs suggests passing dependent actors as constructor arguments, creating them in a preStart() method, or passing them to their referring actors in a message and using become() to switch between initialization states. This example from the testkit docs takes the last of these approaches, but I can’t say I like the result:

import akka.actor.{Actor, ActorRef, Props}
import akka.testkit.TestProbe
import scala.concurrent.duration._

class MyDoubleEcho extends Actor {
  var dest1: ActorRef = _
  var dest2: ActorRef = _
  def receive = {
    // Initialization case: accept the two destinations as a message
    case (d1: ActorRef, d2: ActorRef) =>
      dest1 = d1
      dest2 = d2
    // Normal operation: echo everything to both destinations
    case x =>
      dest1 ! x
      dest2 ! x
  }
}
/* ... */

val probe1 = TestProbe()
val probe2 = TestProbe()
val actor = system.actorOf(Props[MyDoubleEcho])
actor ! ((probe1.ref, probe2.ref))
actor ! "hello"
probe1.expectMsg(500 millis, "hello")
probe2.expectMsg(500 millis, "hello")

This does seem to work, but it seems to me that it pollutes the actor with a bunch of test-related code that probably doesn’t belong in production (by which I mean the receive pattern which takes the two dest parameters).

I have found an interesting take on this question in this presentation by Roland Kuhn introducing akka-testkit, from Scala Days 2012—the entire presentation is worth watching, but the part I’m interested in starts at around 22:05 or so. After a not terribly helpful note about how, if you have difficulty injecting mocks into your code, there is probably something wrong with your design (there may be something to that, but it’s not what you want to hear when you’re looking for a solution), Mr. Kuhn mentions a third option for users of the (then-new) Akka 2.0: actors can use actor path naming to look up their dependent actors; the test ActorSystem can then supplant the real implementations with mocks at the same locations.
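In sketch form, the path-lookup approach might look like this (using actorSelection, the later replacement for that era’s actorFor; the path and messages are invented):

import akka.actor.Actor

case object PollDatabase

class Coordinator extends Actor {
  // Look the dependency up at a well-known path instead of holding
  // a constructor-injected reference to it.
  private val database = context.actorSelection("/user/database")
  def receive = {
    case "tick" => database ! PollDatabase
  }
}

// In the test ActorSystem, park a mock at the same path so the
// lookup resolves to it:
//   system.actorOf(mockDatabaseProps, name = "database")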

Of course, all of this sort of assumes that you have a way of separating out actor creation and lifecycle control from dependency injection itself. A lot of the other Akka literature I’ve read seems to posit the integrated lifecycle management bits of Akka as a feature, right down to the “Let It Crash” maxim on the public Akka blog, and all of these features seem to be in direct opposition to the inversion of control notions that most dependency injection systems are founded on. In the last part of Mr. Kuhn’s talk above, he suggests breaking up actor models into somewhat discrete trees, which then use service locators or similar things to find one another; this might be something I can look into.

There was also a talk at this year’s Scala Days about integrating Spring and Akka, which might have some merit for this purpose, and I recently ran across this promising post which describes an approach to autowiring actors with Spring and Akka 2.2 (in Java). Overall, though, this doesn’t seem to be a problem with a clear solution.

Adventures in Akka

My current technical interest, mercurial as ever, is in Akka. My present employer is mostly a java shop, but they are open-minded and I have a notion to prototype out a rewrite of a simple system there into Akka and Scala.  The system is probably one of the simpler ones we have, known as the “notification service.” It periodically checks for new rows in a particular database table.  If it finds any, it fires off a JSON-formatted request to a REST web service, the “delivery service”; if it gets a successful response from this service it will mark the message as delivered in the database. There are a few wrinkles related to locking, and there are actually a few different web services involved, but that’s pretty much the basics. Something possessed me to make a diagram of the existing flow: 

[Diagram: message flow of the existing notification service]

The purpose of this system is to deliver notifications to particular users, with the idea being that any subsystem which needs to send a notification to someone can put the right data in the database, where this system will pick it up and hand it off to an existing REST service which winds up doing most of the heavy lifting.  The existing service is implemented in Java and Spring, using Quartz as a cron job to kick off a polling method once every 30 minutes or so (we don’t need this service to run particularly swiftly).

It’s not really hard to see how this would translate into a message-based actor model in Scala.  You’d probably have one root actor coordinating things.  You’d have an actor talking to the database, maybe with a supervisor to restart it as needed, and you’d have another actor to handle the HTTP client calls.  Most likely the client actor would spawn off a new actor per individual row of data, and have each of these worker actors make a single HTTP request. On a success, the worker would send a message back to the database actor to update the database row as “completed”; on a failure the worker might just log an error and die.

A rough sketch of that might look like this (pardon my sub-par OmniGraffle skills):

[Diagram: proposed actor supervision hierarchy]

Note that the single-line arrows here represent the actor supervision hierarchy, not message-passing.  I’m also not positive that the “DB Worker” actor needs to exist, versus just having the “Database” actor do the work, but it simplifies things to do it this way and I suppose there might be more than one of those (more on this later).

I’ve been struggling a bit to come up with a good way to represent message passing in a diagram, but I think I’ve got the gist of the design in this one:

[Diagram: message passing between the actors]

Everything is started by Tick messages which are sent to the root actor every 5 seconds via Akka’s scheduler interface (this would be more like 15-30 minutes in production).  This causes the root actor to pass a PollDatabase message to the Database actor; the message includes a reference to the HTTP Client actor.  For each notification row the Database actor finds in the database, it sends a Notify message to the HttpClient actor.  This actor composes a MakeRequest message to one of a pool of Worker actors, including the data from the database and a reference to the Database actor.  The Worker performs the HTTP request; if it is successful it sends a RequestSucceeded message to the Database actor, which will ask a DB Worker actor to update the database to mark the relevant row as successfully delivered.  If the Worker gets an error, it sends a RequestFailed message to the HTTP Client actor, which at this point will just log the error and continue on.
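For concreteness, that message protocol might be sketched as a handful of case classes like these (the row type and payload fields are guesses):

import akka.actor.ActorRef

// Hypothetical row type; the real one would mirror the database table.
final case class NotificationRow(id: Long, payload: String)

case object Tick
final case class PollDatabase(httpClient: ActorRef)
final case class Notify(row: NotificationRow)
final case class MakeRequest(row: NotificationRow, database: ActorRef)
final case class RequestSucceeded(row: NotificationRow)
final case class RequestFailed(row: NotificationRow, error: Throwable)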

I will have more to say about this, but this post is already decently long, so maybe I’ll leave this here so I can refer back to it later. There are a few things I’m struggling with:

  • Despite having read a lot of articles and blog posts on the subject, it’s not obvious to me what the correct way to instantiate and connect these actors is (constructor arguments, preStart() methods, dependency injection, etc).
  • Related to the above, it’s not clear to me how to test this system without mixing up test code and business logic.  In particular I’d like to replace the nodes in yellow above with mock objects and verify that the system still works properly.
  • I would like to have a reasonable interface to Oracle, without needing to include Spring or something in the project. The Typesafe, Inc solution is to use Slick, but I don’t have a burning desire to sell my co-workers on closed-source, commercial software in addition to a new language and framework. 

I’ll have more to say about all this in the days to come.

cbfix

It’s been a busy several weeks for me, mostly in my work-related universe.  I haven’t been completely disregarding the public sphere, though, and managed to get a CBR / CBZ utility script working, to wit cbfix.  Currently it only does one thing, but having the scaffolding around to open up CBR / CBZ files, mess with them, and replace them presents a lot of possibilities. In the meantime I still have a bunch of CoffeeScript and Scala stuff simmering on the back burner.