About Akka Streams

Excellent post about Akka Streams — Akka Team

Really nice introduction to #Akka Streams! FEEL THE POWER! ^^ — Viktor Klang

A must-read on #Akka #Streams!!! — Ellan Vannin CA

In many computer programs, the whole logic (or a vast part of it) is essentially step-by-step processing of data. Of course, this includes the situation when we simply iterate over the data and execute the processing logic on every piece of it. However, there are a few complications here:

  • the processing logic may be quite complex, with various aggregations, merging, routing, error recovery, etc.;
  • we might want to execute steps asynchronously, for instance, to take advantage of multi-processor machines or of asynchronous I/O;
  • asynchronous execution of data processing steps inherently involves buffering, queues, congestion, and other matters that are really difficult to handle properly (please read “Handling Overload” by Fred Hébert).

Therefore, it is sometimes a good idea to express this logic at a high level using some kind of framework, without the need to implement the (possibly complex) mechanics of asynchronous pipeline processing by hand. This was one of the rationales behind frameworks like Apache Camel or Apache Storm.

Actor systems like Erlang or Akka are fairly good for building robust asynchronous data pipelines. However, they are quite low-level by themselves, so writing such pipelines might be tiresome. Newer versions of Akka include a way to describe pipeline processing at a much higher level, called Akka Streams. Akka Streams grew out of the Reactive Streams initiative and implements a streaming interface on top of the Akka actor system. In this post, I would like to give a short introduction to this library.

You might be interested in an introductory post about Akka in this blog.

We will need a Scala project with two dependencies:
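
Something like the following should work (the exact version numbers here are illustrative, not necessarily what the original post used):

```scala
// build.sbt: the actor system plus the streams library; versions are illustrative
libraryDependencies ++= Seq(
  "com.typesafe.akka" %% "akka-actor"  % "2.4.2",
  "com.typesafe.akka" %% "akka-stream" % "2.4.2"
)
```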

Akka Streams basics

In Akka Streams, we represent data processing in the form of data flowing through an arbitrarily complex graph of processing stages. Stages have zero or more inputs and zero or more outputs. The basic building blocks are Sources (one output), Sinks (one input) and Flows (one input and one output). Using them, we can build arbitrarily long linear pipelines. Here is an example of a stream with just a Source, one Flow and a Sink:
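
A minimal sketch of such a stream (using the ActorMaterializer API of that era):

```scala
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Flow, Sink, Source}

implicit val system = ActorSystem("streams-intro")
implicit val materializer = ActorMaterializer()

// Emit the numbers 1..10, double each element, and print the results.
val source = Source(1 to 10)
val double = Flow[Int].map(_ * 2)
val sink   = Sink.foreach[Int](println)

source.via(double).runWith(sink)
```
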
Continue reading

Time-based (version 1) UUIDs ordering in PostgreSQL

The problem

Not so long ago I wrote about a slightly strange problem with time-based UUIDs that I faced (Retrospective time-based UUID generation (with Spark)). This time I needed to do something more usual. As you surely know, the UUID standard has several versions: version 4 consists of purely random numbers, version 1 relies on the identity of the machine and a timestamp with a counter, and so on.

Let’s consider version 1 UUIDs. They include a timestamp with a counter, so they can naturally be ordered by it. In other words, given two time-based UUIDs, I wanted to tell whether the first is less than, greater than, or equal to the second. No big deal, say, in Java: Long.compare(u1.timestamp(), u2.timestamp()). But I wanted to do this in PostgreSQL, inside an SQL query. PostgreSQL does have a uuid data type, but it provides no functions for comparing UUIDs by their version 1 timestamps. That is why I decided to write such a function myself.

Version 1 UUID structure

You can find the full UUID specification in RFC 4122. Let’s consider here only the part of version 1 that we are interested in. UUIDs are 128 bits (i.e. 16 bytes) long, and the bytes have different meanings in different versions of the standard. The version 1 layout is shown in the picture:

UUID version 1 structure
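
To make the layout concrete, here is a sketch in Scala of the timestamp extraction that the PostgreSQL function has to reproduce with bit arithmetic (it mirrors what java.util.UUID.timestamp() does):

```scala
import java.util.UUID

// A sketch: extract the 60-bit version 1 timestamp (100-ns intervals since
// 1582-10-15) from the time_low, time_mid and time_hi fields.
def v1Timestamp(u: UUID): Long = {
  require(u.version() == 1, "not a time-based UUID")
  val msb = u.getMostSignificantBits
  val timeLow  = (msb >>> 32) & 0xFFFFFFFFL
  val timeMid  = (msb >>> 16) & 0xFFFFL
  val timeHigh = msb & 0x0FFFL          // the version bits are masked out
  (timeHigh << 48) | (timeMid << 32) | timeLow
}

// Two version 1 UUIDs can then be compared by timestamp:
// v1Timestamp(u1) < v1Timestamp(u2)
```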

Continue reading

Type-safe query builders in Scala revisited: shapeless

Not so long ago, I wrote a post about creating type-safe query builders in Scala from scratch. In it, I also suggested using the shapeless library to achieve the same goal. Now I have decided to show how it could be done. The code is here.

Problem reminder

Without going into much detail, the problem was to provide a type-safe way to build queries (to an abstract database, for instance) with parameters of different types. Something like this:
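
The gist of it, as a hypothetical sketch (the names here are made up, not the post’s actual code): a query carries its parameter types in an HList.

```scala
import shapeless._

// The type parameter records the types of the query's placeholders.
case class Query[Params <: HList](sql: String) {
  def execute(params: Params): Unit = ()   // would run against the database
}

val query = Query[String :: Int :: Boolean :: HNil]("... ? ... ? ... ?")

query.execute("string" :: 1 :: false :: HNil)   // compiles
// query.execute(1 :: "string" :: HNil)         // does not compile
```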

Continue reading

Retrospective time-based UUID generation (with Spark)

I have faced a problem: given about 50 GB of data in one database, export the records to another database with slight modifications, which included generating UUIDs based on the timestamps of the records (collisions were intolerable). We have a Spark cluster, and with it the problem did not seem even a little tricky: create an RDD, map it and send the results to the target DB.

(In this post I am talking about Spark. I have not covered it in this blog so far, but I will. In short, it is a framework for large-scale data processing, like Hadoop. It operates on abstract distributed data collections called Resilient Distributed Datasets (RDDs). Their interface is quite similar to that of functional collections (map, flatMap, etc.) and also includes operations specific to Spark’s distributed, large-scale nature.)
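
As a runnable sketch of that plan (the data and the UUID generation below are stand-ins for the real job’s):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import java.util.UUID

case class Record(id: UUID, payload: String, timestamp: Long)

object Export {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("export").setMaster("local[*]"))
    // In the real job the RDD came from the source database; a tiny in-memory
    // collection stands in for it here.
    val source = sc.parallelize(Seq(Record(null, "a", 1L), Record(null, "b", 2L)))
    val exported = source.map { r =>
      // The real job needed *time-based* (version 1) UUIDs derived from
      // r.timestamp; randomUUID only keeps this sketch self-contained.
      r.copy(id = UUID.randomUUID())
    }
    // Stands in for writing each partition to the target DB.
    exported.foreachPartition(_.foreach(println))
  }
}
```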

However, I was too optimistic – there were difficulties.

Continue reading

Type-safe query builders in Scala

Recently, I was hacking on phantom, a Scala library for querying the Cassandra database. At the time, it was not able to build prepared statements, which I needed, so I added this functionality. However, the phantom developers implemented prepared statements themselves soon after :) Nevertheless, I decided to write this post about the core idea of my implementation.

Caution: Don’t do this at home, use shapeless :)

My plan was to be able to prepare queries like this:
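
The snippet itself is not shown in this excerpt; roughly, the shape is as in this hypothetical sketch, where the parameter types accumulate in the type of execute:

```scala
// A hypothetical sketch, not the actual builder from the post.
class Query[F](run: F) {
  def execute: F = run
}

// Pretend the builder produced a query with (String, Int, Boolean) parameters:
def prepare(sql: String): Query[(String, Int, Boolean) => Unit] =
  new Query((s: String, i: Int, b: Boolean) => println(s"running $sql with ($s, $i, $b)"))

val query = prepare("SELECT ... WHERE a = ? AND b = ? AND c = ?")
```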

and after that to execute it: query.execute("string", 1, false). Moreover, I wanted this execution to be type-safe, so that it would not be possible to execute something like query.execute(1, "string", 0).

I will abstract away from phantom queries and focus on the general idea. The code is here.

Continue reading

The Bloom filter

In many software engineering problems, we have a set and need to determine whether some value belongs to it. If the maximum possible cardinality (size) of the set, i.e. the total count of elements we might consider, is small, the solution is straightforward: just store the set explicitly (for instance, in the form of a red-black tree), update it when necessary, and check whether it contains the elements we are interested in. But what if the maximum cardinality is large, or we need many such sets to operate simultaneously? Or what if the set membership test is an expensive operation?

Suppose we want to know whether an element belongs to a set. We have decided that it is acceptable to get false positive answers (the answer is “yes”, but the element is not actually in the set) with probability p, and not acceptable to get false negatives (the answer is “no”, but the element actually is in the set). The data structure that can help us in this situation is called the Bloom filter.

A Bloom filter (proposed by Burton Howard Bloom in 1970) is a bit array of m bits (initially all set to 0) together with k different hash functions. Each hash function maps a value to a single integer, which is used as an index into the array.
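
A minimal sketch in Scala (seeding MurmurHash3 k times is my own way to get the k hash functions here):

```scala
import scala.collection.mutable
import scala.util.hashing.MurmurHash3

class BloomFilter(m: Int, k: Int) {
  private val bits = new mutable.BitSet(m)

  // k hash functions derived by seeding MurmurHash3 with 0 until k.
  private def indices(value: String): Seq[Int] =
    (0 until k).map { seed =>
      val h = MurmurHash3.stringHash(value, seed)
      ((h % m) + m) % m                 // map the hash into [0, m)
    }

  def add(value: String): Unit = indices(value).foreach(bits += _)

  // False positives are possible, false negatives are not.
  def mightContain(value: String): Boolean = indices(value).forall(bits)
}
```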

Look at this picture from Wikipedia:
Bloom filter

Continue reading

Delayed message delivery in RabbitMQ

UPD June 01, 2015: there is a plugin for this now.

A lot of developers use the RabbitMQ message broker. It is quite mature but still lacks some features that one may need. One of them is delayed message delivery: there is no way to send a message that will be delivered after a specified delay (it is a limitation of the AMQP protocol). Fortunately, there is a hack for this.

RabbitMQ logo

Let’s start with dead letters. A message can become “dead” for several reasons, such as rejection or TTL (time to live) expiration. RabbitMQ can deal with such messages by redirecting them to a particular exchange and routing key. We can use this ability to implement delayed delivery: we will create a special queue for holding delayed messages, and this queue will have no subscribers, so messages in it can only expire. After expiration, the messages will be passed to the destination exchange and routing key, just as planned.
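
A sketch of this setup, using the RabbitMQ Java client from Scala (all queue and exchange names here are made up):

```scala
import com.rabbitmq.client.ConnectionFactory

val factory = new ConnectionFactory()
val connection = factory.newConnection()
val channel = connection.createChannel()

// The real destination of the messages.
channel.exchangeDeclare("target-exchange", "direct", true)
channel.queueDeclare("target-queue", true, false, false, null)
channel.queueBind("target-queue", "target-exchange", "target-key")

// The delay queue: no consumers, a per-queue TTL, and dead-lettering
// configured to forward expired messages to the real destination.
val args = new java.util.HashMap[String, AnyRef]()
args.put("x-message-ttl", Int.box(30000))            // the delay: 30 seconds
args.put("x-dead-letter-exchange", "target-exchange")
args.put("x-dead-letter-routing-key", "target-key")
channel.queueDeclare("delay-queue", true, false, false, args)

// Publish to the delay queue; the message shows up in target-queue 30 s later.
channel.basicPublish("", "delay-queue", null, "hello".getBytes("UTF-8"))
```

Note that with a per-queue TTL, every message in the delay queue gets the same delay. Per-message expiration is also possible, but an expired message is dead-lettered only once it reaches the head of the queue.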

Continue reading

Introduction to Akka

There are several models of concurrent computing, and the actor model is one of them. I am going to give a glimpse of this model and of one of its implementations – the Akka toolkit.

Akka logo

The actor model

In the actor model, actors are objects that have state and behavior and communicate with each other by message passing. This sounds like good old objects from OOP, but the crucial difference is that message passing is one-way and asynchronous: an actor sends a message to another actor and continues its work. In fact, actors are totally reactive: all their activity happens as a reaction to incoming messages, which are processed one by one. However, this is not a limitation, because messages can be of any sort, including scheduled messages (fired by a timer) and network messages.
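
A minimal sketch of what this looks like in Akka:

```scala
import akka.actor.{Actor, ActorSystem, Props}

// An actor that reacts to incoming messages one by one.
class Greeter extends Actor {
  def receive = {
    case name: String => println(s"Hello, $name!")
  }
}

val system = ActorSystem("demo")
val greeter = system.actorOf(Props[Greeter], "greeter")

greeter ! "World"   // fire-and-forget: the sender continues immediately
```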

Continue reading

Value Classes in Scala

Type systems and compile-time type checking are great things that can save you a couple of hours of debugging; they also have documenting potential and can make code more understandable. In my opinion, it is wise to use them, and unfortunately, sometimes we do not do this enough. Consider Integer/Int/int. A counter could be an Integer, an entity identifier could be an Integer, an integer number in an arithmetic expression could be an Integer. In most cases, all these Integers have nothing to do with each other: within your domain, it is a bad idea to compare them, do arithmetic on them, or pass one in place of another as a function parameter.

In one of my projects (in C#), there are a dozen domain entities with integer identifiers that are passed all over the code. After a couple of bugs caused by mixed-up identifiers of different entities, I solved this problem by replacing the plain integers with structs (in C#, structs are value types used for representing lightweight objects such as Point or Color) named like Id<EntityName>T (the T suffix distinguishes the type from property names). The key idea was to introduce a new level of types, letting the type checker look intently at the code instead of me. It has worked: I got rid of some old bugs in rarely used parts of the code and hope that new bugs of this kind will not bother me in the future. (Aside: I hope this post will persuade you not only to consider using value classes but also to think about the role of types in code quality.)
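
In Scala, value classes express the same idea; a minimal sketch:

```scala
// One distinct identifier type per entity, usually with no wrapper object
// at runtime thanks to extending AnyVal.
case class UserId(value: Int) extends AnyVal
case class OrderId(value: Int) extends AnyVal

def loadUser(id: UserId): String = s"user #${id.value}"

val userId  = UserId(42)
val orderId = OrderId(42)

loadUser(userId)      // compiles
// loadUser(orderId)  // does not compile: OrderId is not a UserId
```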

Continue reading

Why I like Scala

I am familiar (more or less) with a number of programming languages and have both emotional and rational opinions about them. Scala is certainly in the group of languages I like. I have decided to summarize my judgments about Scala’s attractive parts in a blog post, and here it is. Also, I have some ideas for posts about Scala and its technology stack, so an introduction is probably needed.

Scala logo

Scala is a general-purpose programming language created by Martin Odersky more than ten years ago. It compiles to JVM bytecode and is interoperable (in both directions) with Java, including mixed compilation, which gives Scala access to the enormous amount of code created for the JVM. An interesting property, and one of the strongest selling points of the language, is its fusion of the object-oriented and functional programming paradigms.

Continue reading