A Response to the Reactive Manifesto

The Reactive Manifesto published by Typesafe is getting a great deal of attention in the Scala community.  A new version was posted yesterday, and I could not help but reply with criticism that has been bothering me for some time.  The Reactive Manifesto appears to be a thinly veiled way to back into Akka being the cure for all ills.  I think this is an irresponsible message, especially from a company like Typesafe that is guiding a great number of people who are just venturing into Scala.  One sure way to hurt the Scala community is to offer the advice that one should adhere to a reference architecture that is best applied to only a certain set of problems.  My comment/reply to the Manifesto appears below in its entirety.  I hope that, at a minimum, it stimulates some thought for those who were intending to jump into the Akka pool without fully understanding why.

Jonas:

I am the VP of Engineering at LeadiD, a tech startup based in Ambler, PA. We are building out a large, complex stack for servicing web transactions at scale. We already process billions of web service calls each month. We are gung-ho on Scala even though it’s new to us here. But I must confess that so far I’ve felt the Reactive Manifesto to be a bit, if you’ll forgive me, contrived. When I read it, it seems that we’ve all been suffering from ADD (Akka Deficit Disorder) and we just didn’t know it. But actors (which are at the heart of Akka) are not the cure for everything. They are a great solution for certain types of problems.

Allow me to comment on the system attributes in the Manifesto:

Responsive - An essential attribute. This is a function of proper provisioning. There are many ways to achieve this.

Resilient - An essential attribute. Most mission-critical systems ensure this using a load balancer and a cluster (or clusters) of servers. It does not require the use of actors or supervision hierarchies.

Elastic - An essential attribute. Akka does have a sweet spot here. But elasticity can also be achieved with elastic provisioning techniques, either by rolling your own in a DevOps manner or by using third-party tools.

Message-Driven - This is not an essential attribute. You’ve fast-forwarded to a solution here. It’s a means to an end, and it basically describes actors. Increasing asynchrony is very important as a piece of the puzzle. But synchronous processing of a request that does not lend itself to being split up or parallelized still has its place.
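
To make the distinction concrete, here is a minimal sketch against Akka's classic (untyped) actor API; the GetUser/User messages, the UserActor, and findUser are hypothetical names used only for illustration. The message-driven version turns even a simple read into an asynchronous ask with a timeout, while the plain synchronous call remains a reasonable choice for work that fits in one process:

    import akka.actor.{Actor, ActorSystem, Props}
    import akka.pattern.ask
    import akka.util.Timeout
    import scala.concurrent.Future
    import scala.concurrent.duration._

    // Hypothetical domain types, for illustration only.
    case class GetUser(id: Long)
    case class User(id: Long, name: String)

    class UserActor extends Actor {
      def receive: Receive = {
        case GetUser(id) => sender() ! User(id, "example")
      }
    }

    object Lookup extends App {
      val system = ActorSystem("demo")
      val users  = system.actorOf(Props(new UserActor), "users")

      // Message-driven: even a simple read becomes an ask with a timeout
      // and an asynchronous reply.
      implicit val timeout: Timeout = Timeout(2.seconds)
      val asyncUser: Future[User] = (users ? GetUser(42)).mapTo[User]

      // Plain synchronous call: still a good fit when the work does not
      // need to be split up or parallelized.
      def findUser(id: Long): User = User(id, "example")
      val syncUser: User = findUser(42)

      system.terminate()
    }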

There are some complexities with actors:
  • Actors are a mechanism to solve a distributed computing problem - that is, when a given request is best split up across a number of machines. When a given problem does not lend itself to distributed computing, it’s not a great fit. Many CRUD functions (even at scale) out there are served best by a cluster of nearly identical, load-balanced app servers, each of which is self-contained.
  • Unless you have a distributed computing problem, location transparency is not a good thing. I want the caller to know he’s making a network call vs. an in-memory lookup (see the sketch after this list).
  • Given the buzz, people will gravitate to actors because they appear to be the latest and greatest. But unless their problem merits it, they’ve opened up new cans of worms. If the work can be handled in a single process, actors might not be the right choice.
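
On the location-transparency point, here is a small sketch using Akka's actorSelection; the actor system name, paths, and host are hypothetical. The two call sites look identical, and only the path string hints that the second lookup crosses the network, with very different latency and failure characteristics:

    import akka.actor.{ActorSelection, ActorSystem}
    import akka.pattern.ask
    import akka.util.Timeout
    import scala.concurrent.duration._

    object LocationDemo extends App {
      val system = ActorSystem("lookup")
      implicit val timeout: Timeout = Timeout(2.seconds)

      // Hypothetical actor paths, for illustration only.
      val local: ActorSelection  = system.actorSelection("/user/catalog")
      val remote: ActorSelection = system.actorSelection(
        "akka.tcp://shop@10.0.0.7:2552/user/catalog")

      // Nothing at the call site says that the second lookup is a network
      // call rather than an in-memory one.
      val a = local ? "lookup-item-1"
      val b = remote ? "lookup-item-1"

      system.terminate()
    }
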
Rich Hickey says it best:

“I chose not to use the Erlang-style actor model for same-process state management in Clojure for several reasons:
  • It is a much more complex programming model, requiring 2-message conversations for the simplest data reads, and forcing the use of blocking message receives, which introduce the potential for deadlock. Programming for the failure modes of distribution means utilizing timeouts etc. It causes a bifurcation of the program protocols, some of which are represented by functions and others by the values of messages.
  • It doesn't let you fully leverage the efficiencies of being in the same process. It is quite possible to efficiently directly share a large immutable data structure between threads, but the actor model forces intervening conversations and, potentially, copying. Reads and writes get serialized and block each other, etc.
  • It reduces your flexibility in modeling - this is a world in which everyone sits in a windowless room and communicates only by mail. Programs are decomposed as piles of blocking switch statements. You can only handle messages you anticipated receiving. Coordinating activities involving multiple actors is very difficult. You can't observe anything without its cooperation/coordination - making ad-hoc reporting or analysis impossible, instead forcing every actor to participate in each protocol.
  • It is often the case that taking something that works well locally and transparently distributing it doesn't work out - the conversation granularity is too chatty or the message payloads are too large or the failure modes change the optimal work partitioning, i.e. transparent distribution isn't transparent and the code has to change anyway.”
And then there is Fowler’s First Law of Distributed Object Design: Don’t distribute your objects! (unless you must)

Typesafe is shepherding a great number of people in the Scala community on how to best use the language and its platform. By pushing Reactive (which basically equals Akka) on its front page as the cure for all ills, it suggests that you must be missing something if you’re not doing it this way. But the way that’s been outlined is most appropriate in a distributed computing setup. And many distributed computing problems arising out of large datasets, for example, can be addressed using other techniques such as MapReduce.

While the Akka platform is very powerful, I believe that the Scala community could use better education from Typesafe on when it is best used as well as when to consider alternative architectures. Balance will go a long way toward credibility. Thanks so much for the work. But please bear in mind that if Akka is a hammer, that does not imply that everything is a nail.

Joe Lynch





Comments

  1. I'm approaching Akka as a long-time .NET developer who gravitated to the Reactive Extensions for .NET after falling in love with LINQ. Recently I've been pulled toward Java 8/Scala. I have to say, coming at this as an outsider, there is definitely something to the framework. It's not a perfect fit for every scenario, but it's an approachable solution for dealing with problems that many companies are either dealing with now or hope to deal with in the future (coping with explosive growth, scaling horizontally/geo-spatially, growing out of datacenters and into clouds).

    Some of these concerns, particularly geospatial distribution and cloud computing, come with some added complexity and risks, which Akka is particularly adept at mitigating. Embracing the actor model gives you resiliency and elasticity virtually for free, and developing applications in a message-oriented way gives you durability and failure recovery with little extra effort.

    As a web developer, you don't have to completely drink the Kool Aid to pick up the benefits. The Play framework is built on top of Akka and provides a simple and familiar framework for writing web applications and web services. It abstracts away some of the core Akka concepts and still gives you the ability to pay attention to them when the time comes. One more thought... Check out Akka Streams. The concepts are very similar to Rx in .NET but include the concept of back pressure. In large distributed systems, this is crucial for preventing one system from overpowering another.
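
    As a rough illustration of that last point, here is a minimal Akka Streams sketch (assuming Akka 2.6+, where the materializer is provided by the implicit ActorSystem; the names are illustrative). A fast source feeds a deliberately slow sink; demand flows upstream, so the source is throttled to the sink's pace instead of overwhelming it:

        import akka.actor.ActorSystem
        import akka.stream.scaladsl.{Sink, Source}

        object BackPressureDemo extends App {
          implicit val system: ActorSystem = ActorSystem("streams-demo")

          // A fast producer feeding a deliberately slow consumer.
          Source(1 to 1000)
            .map(_ * 2)
            .runWith(Sink.foreach { n =>
              Thread.sleep(10) // stand-in for a slow downstream system
              println(n)
            })
        }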

  2. While I applaud you for remaining critical, I cannot help but reply to your
    four statements and hopefully provide a more nuanced picture.

    > It is a much more complex programming model, requiring 2-message
    > conversations for the simplest data reads, and forcing the use of blocking
    > message receives, which introduce the potential for deadlock. Programming
    > for the failure modes of distribution means utilizing timeouts etc. It
    > causes a bifurcation of the program protocols, some of which are represented
    > by functions and others by the values of messages.

    This is a direct consequence of preparing your application to run in a
    distributed fashion. Instead of just calling a method that might live on some
    remote machine and might fail with exceptions, you are explicit in how errors
    should be handled. This forces the developer to think of scenarios that can
    and *will* occur in distributed systems. The code can be made such that it
    handles these scenarios correctly.
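
    For instance, with the ask pattern the possibility of a lost or late reply is visible in the code rather than hidden behind a plain method call. A small sketch, where the pricing actor and the fallback value are hypothetical:

        import akka.actor.{ActorRef, ActorSystem}
        import akka.pattern.{ask, AskTimeoutException}
        import akka.util.Timeout
        import scala.concurrent.Future
        import scala.concurrent.duration._

        object ExplicitFailures {
          // `pricing` is a hypothetical actor reference; the shape of the
          // call is what matters here.
          def quote(pricing: ActorRef, sku: String)
                   (implicit system: ActorSystem): Future[String] = {
            import system.dispatcher
            implicit val timeout: Timeout = Timeout(1.second)
            (pricing ? sku).mapTo[String].recover {
              // A timed-out or failed reply is handled explicitly.
              case _: AskTimeoutException => "quote-unavailable"
            }
          }
        }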

    > It doesn't let you fully leverage the efficiencies of being in the same
    > process. It is quite possible to efficiently directly share a large
    > immutable data structure between threads, but the actor model forces
    > intervening conversations and, potentially, copying. Reads and writes get
    > serialized and block each other, etc.

    Actually, the abstraction of sending messages to another actor makes it
    possible for the actor system to pass messages to other actors efficiently
    when they are in the same JVM without losing any conciseness in the program.

    > It reduces your flexibility in modeling - this is a world in which everyone
    > sits in a windowless room and communicates only by mail. Programs are
    > decomposed as piles of blocking switch statements. You can only handle
    > messages you anticipated receiving. Coordinating activities involving
    > multiple actors is very difficult. You can't observe anything without its
    > cooperation/coordination - making ad-hoc reporting or analysis impossible,
    > instead forcing every actor to participate in each protocol.

    Ultimately every computer program is some windowless room where communication
    happens through certain set protocols, and this will not change short of real
    AI. While it is true that you can only handle messages that you expect, this
    also holds for traditional classes, programs, etc. It also doesn't prevent you
    from expecting "any" message and handling it according to your business rules.
    Handling unknown messages or method calls without knowing what to do with them
    is quite hard. It is the responsibility of the programmer to handle them,
    either by ignoring them or by performing some generic action.
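
    For what it's worth, a catch-all clause in an actor's receive is all it takes to observe messages you did not anticipate; a small sketch (the Auditor actor is hypothetical):

        import akka.actor.Actor

        class Auditor extends Actor {
          def receive: Receive = {
            case "ping" => sender() ! "pong"
            // Anything unanticipated is still observed and can be logged,
            // forwarded, or dropped according to business rules.
            case other  => context.system.log.info("ignoring unexpected message: {}", other)
          }
        }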

    > It is often the case that taking something that works well locally and
    > transparently distributing it doesn't work out - the conversation
    > granularity is too chatty or the message payloads are too large or the
    > failure modes change the optimal work partitioning, i.e. transparent
    > distribution isn't transparent and the code has to change anyway.”

    This is entirely to be expected due to implicit requirements hard-coded in the
    program, such as latency and bandwidth. The transport is still transparent;
    however, depending on your deployment environment you might need to tweak the
    latency parameters. Setting a time-out of 5 milliseconds for expecting a
    response might be reasonable and obtainable in a local system but hard to
    obtain in a distributed datacenter. As always, one should take into account
    the production environment on which one deploys the application.
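
    In practice this usually means reading the timeout from configuration rather than hard-coding it; a small sketch, where the app.ask-timeout key is hypothetical:

        import akka.actor.ActorSystem
        import akka.util.Timeout
        import java.util.concurrent.TimeUnit
        import scala.concurrent.duration._

        object ConfiguredTimeout {
          val system = ActorSystem("svc")

          // Hypothetical key in application.conf, e.g.
          //   app.ask-timeout = 5 ms    (local development)
          //   app.ask-timeout = 250 ms  (distributed deployment)
          implicit val askTimeout: Timeout = Timeout(
            system.settings.config
              .getDuration("app.ask-timeout", TimeUnit.MILLISECONDS)
              .millis)
        }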

    Replies
    1. Thanks for your comments! To be clear, that entire section is a quote from Rich Hickey, the creator of Clojure. It's making the argument that actors are overkill when your program is not distributed. You can see the larger context of the argument here: http://clojure.org/state

  3. My take on this is: (1) Akka is a great tool for many problems, (2) Akka does not address all problems. A panacea for all ills simply does not exist and never will.

    Akka excels at dividing work (and mutable state) between worker entities; this is where the actor model shines. STMs excel in situations where there is a need to combine work. In this sense, these two approaches are orthogonal and should not be perceived as competitors for the developers' attention. This is why I'm worried that ScalaSTM may be deprecated in Akka.
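
    To illustrate the contrast, a minimal ScalaSTM sketch (the account Refs are hypothetical): two pieces of in-process state change together in one atomic transaction, with no message round-trips because nothing leaves the process.

        import scala.concurrent.stm._

        object StmTransfer {
          val from = Ref(100)
          val to   = Ref(0)

          // ScalaSTM coordinates the two writes in a single transaction.
          def transfer(amount: Int): Unit = atomic { implicit txn =>
            from() = from() - amount
            to()   = to() + amount
          }
        }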

    In my opinion, Akka would greatly benefit from developing more cooperation patterns between actors. Supervision is one, but it's about the only one currently in use in Akka. Transactors used to be another, but they are deprecated--why?

