package catnap

Package Members

  1. package cancelables

Type Members

  1. trait CancelableF[F[_]] extends AnyRef

    Represents a pure data structure that describes an effectful, idempotent action that can be used to cancel async computations, or to release resources.

    This is the pure, higher-kinded equivalent of monix.execution.Cancelable and can be used in combination with data types meant for managing effects, like Task, Coeval or cats.effect.IO.

    Note: the F suffix comes from this data type being abstracted over F[_].
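
    The idempotency guarantee is the interesting part: once the cancelation action has run, subsequent invocations must be no-ops. A minimal sketch of that idea, using a hypothetical idempotentToken helper built on cats.effect.concurrent.Ref (the real builders in the companion object provide a similar guarantee):

    import cats.effect.IO
    import cats.effect.concurrent.Ref

    // Hypothetical helper, for illustration only: wraps a release
    // action such that only the first invocation has any effect
    def idempotentToken(release: IO[Unit]): IO[IO[Unit]] =
      Ref[IO].of(false).map { triggered =>
        triggered.getAndSet(true).flatMap {
          case false => release // first call runs the action
          case true  => IO.unit // subsequent calls are no-ops
        }
      }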

  2. trait ChannelF[F[_], E, A] extends Serializable

    Channel is a communication channel that can be consumed via consume.

    Examples: see ConcurrentChannel below, which implements this interface.
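
    For instance, given any ChannelF, its events can be pulled by acquiring a consumer. A minimal sketch, relying only on consume returning a cats.effect.Resource and on ConsumerF.pull, both shown in the ConcurrentChannel documentation below (firstEvent is a hypothetical helper):

    import cats.effect.IO

    // Awaits the first event pushed on the channel, or the final
    // completion event (Left) in case the channel was halted
    def firstEvent[E, A](channel: ChannelF[IO, E, A]): IO[Either[E, A]] =
      channel.consume.use(consumer => consumer.pull)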

  3. final class CircuitBreaker[F[_]] extends AnyRef

    The CircuitBreaker is used to provide stability and prevent cascading failures in distributed systems.

    Purpose

    As an example, consider a web application interacting with a remote third party web service. Let's say the third party has oversold their capacity and their database melts down under load. Assume that the database fails in such a way that it takes a very long time to hand back an error to the third party web service. This in turn makes calls fail after a long period of time. Back in our web application, the users have noticed that their form submissions take much longer, seeming to hang. The users do what they know to do, which is hit the refresh button, adding more requests to their already running ones. This eventually causes the failure of the web application due to resource exhaustion, affecting all users, even those who are not using functionality dependent on this third party web service.

    Introducing circuit breakers on the web service call would cause the requests to begin to fail-fast, letting the user know that something is wrong and that they need not refresh their request. This also confines the failure behavior to only those users that are using functionality dependent on the third party, other users are no longer affected as there is no resource exhaustion. Circuit breakers can also allow savvy developers to mark portions of the site that use the functionality unavailable, or perhaps show some cached content as appropriate while the breaker is open.

    How It Works

    The circuit breaker models a concurrent state machine that can be in any of these 3 states:

    1. Closed: During normal operations or when the CircuitBreaker starts
      • Exceptions increment the failures counter
      • Successes reset the failure count to zero
      • When the failures counter reaches the maxFailures count, the breaker is tripped into the Open state
    2. Open: The circuit breaker rejects all tasks with an ExecutionRejectedException
      • All tasks fail fast with ExecutionRejectedException
      • After the configured resetTimeout, the circuit breaker enters the HalfOpen state, allowing one task to go through for testing the connection
    3. HalfOpen: The circuit breaker has already allowed a task to go through, as a reset attempt, in order to test the connection
      • The first task attempted after the Open timeout has expired is allowed through without failing fast, just as the circuit breaker transitions into the HalfOpen state
      • All other tasks attempted while HalfOpen fail fast with an exception, just as in the Open state
      • If that task attempt succeeds, the breaker is reset back to the Closed state, with the resetTimeout and the failures count also reset to their initial values
      • If the task attempt fails, the breaker is tripped again into the Open state (with the resetTimeout multiplied by the exponential backoff factor)

    Usage

    import monix.catnap._
    import scala.concurrent.duration._
    
    // Using cats.effect.IO for this sample, but you can use any effect
    // type that integrates with Cats-Effect, including monix.eval.Task:
    import cats.effect.{Clock, IO}
    implicit val clock = Clock.create[IO]
    
    // Using the "unsafe" builder for didactic purposes, but prefer
    // the safe "apply" builder:
    val circuitBreaker = CircuitBreaker[IO].unsafe(
      maxFailures = 5,
      resetTimeout = 10.seconds
    )
    
    //...
    val problematic = IO {
      val nr = util.Random.nextInt()
      if (nr % 2 == 0) nr else
        throw new RuntimeException("dummy")
    }
    
    val task = circuitBreaker.protect(problematic)

    When attempting to close the circuit breaker and resume normal operations, we can also apply an exponential backoff for repeated failed attempts, like so:

    val exponential = CircuitBreaker[IO].of(
      maxFailures = 5,
      resetTimeout = 10.seconds,
      exponentialBackoffFactor = 2,
      maxResetTimeout = 10.minutes
    )

    In this sample we attempt to reconnect after 10 seconds, then after 20, 40 and so on, a delay that keeps increasing up to a configurable maximum of 10 minutes.

    Sync versus Async

    The CircuitBreaker works with both Sync and Async type class instances.

    If the F[_] type used implements Async, then the CircuitBreaker gains the ability to wait for it to be closed, via awaitClose.

    Retrying Tasks

    For async tasks, it's generally best to retry with an exponential back-off strategy:

    import cats.implicits._
    import cats.effect._
    import monix.execution.exceptions.ExecutionRejectedException
    import scala.concurrent.duration._
    
    def protectWithRetry[F[_], A](task: F[A], cb: CircuitBreaker[F], delay: FiniteDuration)
      (implicit F: Async[F], timer: Timer[F]): F[A] = {
    
      cb.protect(task).recoverWith {
        case _: ExecutionRejectedException =>
          // Sleep, then retry
          timer.sleep(delay).flatMap(_ => protectWithRetry(task, cb, delay * 2))
      }
    }

    An alternative is to wait for the precise moment at which the CircuitBreaker is closed again, which you can do via the awaitClose method:

    def protectWithRetry2[F[_], A](task: F[A], cb: CircuitBreaker[F])
      (implicit F: Async[F]): F[A] = {
    
      cb.protect(task).recoverWith {
        case _: ExecutionRejectedException =>
          // Waiting for the CircuitBreaker to close, then retry
          cb.awaitClose.flatMap(_ => protectWithRetry2(task, cb))
      }
    }

    Be careful when doing this and plan accordingly, because you might end up with the "thundering herd" problem.

    Credits

    This Monix data type was inspired by Akka's Circuit Breaker.

  4. final class ConcurrentChannel[F[_], E, A] extends ProducerF[F, E, A] with ChannelF[F, E, A]

    ConcurrentChannel can be used to model complex producer-consumer communication channels.

    It exposes these fundamental operations:

    • push for pushing single events to consumers (producer side)
    • pushMany for pushing event sequences to consumers (producer side)
    • halt for pushing the final completion event to all consumers (producer side)
    • consume for creating a ConsumerF value that can consume the incoming events from the channel

    Example

    import cats.implicits._
    import cats.effect._
    import monix.execution.Scheduler.global
    
    // For being able to do IO.start
    implicit val cs = SchedulerEffect.contextShift[IO](global)
    // We need a `Timer` for this to work
    implicit val timer = SchedulerEffect.timer[IO](global)
    
    // Completion event
    sealed trait Complete
    object Complete extends Complete
    
    def logLines(consumer: ConsumerF[IO, Complete, String], index: Int): IO[Unit] =
      consumer.pull.flatMap {
        case Right(message) =>
          IO(println("Worker $$index: $$message"))
            // continue loop
            .flatMap(_ => logLines(consumer, index))
        case Left(Complete) =>
          IO(println("Worker $$index is done!"))
      }
    
    for {
      channel <- ConcurrentChannel[IO].of[Complete, String]
      // Workers 1 & 2, sharing the load between them
      task_1_2 = channel.consume.use { ref =>
        List(logLines(ref, 1), logLines(ref, 2)).parSequence_
      }
      consumers_1_2 <- task_1_2.start // fiber
      // Workers 3 & 4, receiving the same events as workers 1 & 2,
      // but sharing the load between them
      task_3_4 = channel.consume.use { ref =>
        List(logLines(ref, 3), logLines(ref, 4)).parSequence_
      }
      consumers_3_4 <- task_3_4.start // fiber
      // Pushing some samples
      _ <- channel.push("Hello, ")
      _ <- channel.push("World!")
      // Signal there are no more events
      _ <- channel.halt(Complete)
      // Wait for the consumers to complete
      _ <- consumers_1_2.join
      _ <- consumers_3_4.join
    } yield ()

    Unicasting vs Broadcasting vs Multicasting

    Unicasting: A communication channel between one producer and one ConsumerF. Multiple workers can share the load of processing incoming events. For example, if you want 8 workers running in parallel, you can create one ConsumerF, via consume, and then use it for multiple workers. Internally this setup uses a single queue, which all those workers share.

    Broadcasting: the same events can be sent to multiple consumers, thus duplicating the load; such a setup is created by obtaining multiple ConsumerF values via multiple calls to consume. Internally each ConsumerF gets its own queue, hence messages are duplicated.

    Multicasting: multiple producers can push events at the same time, provided the channel's type is configured as a MultiProducer.

    Back-Pressuring and the Polling Model

    When consumers get created via consume, a buffer gets created and assigned per consumer.

    Depending on what the BufferCapacity is configured to be, the initialized consumer can work with a maximum buffer size, a size that could be rounded to a power of 2, so you can't rely on it to be precise. See consumeWithConfig for customizing this buffer on a per-consumer basis, or the ConcurrentChannel.withConfig builder for setting the default used in consume.

    On push, when the queue is full, the implementation back-pressures until the channel has room again in its internal buffer(s), the task being completed when the value was pushed successfully. Similarly, ConsumerF.pull (returned by consume) waits for the channel to have items in it. This works for both bounded and unbounded channels.

    For both push and pull, whenever waiting is needed, it is done asynchronously, without any threads being blocked.
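
    For example, a per-consumer buffer can be bounded through consumeWithConfig; a sketch, assuming ConsumerF.Config also exposes an optional capacity parameter next to the consumerType shown further below:

    import cats.effect.IO
    import monix.execution.BufferCapacity.Bounded

    // Assumption: `capacity` is an Option-wrapped parameter of
    // ConsumerF.Config; a full buffer back-pressures `push`
    val boundedConfig = ConsumerF.Config(capacity = Some(Bounded(16)))

    // Reusing the ContextShift from the example above
    val firstItem =
      ConcurrentChannel[IO].of[Unit, Int].flatMap { channel =>
        channel.consumeWithConfig(boundedConfig).use(_.pull)
      }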

    Multi-threading Scenario

    This channel supports the fine-tuning of the concurrency scenario via ChannelType.ProducerSide (see ConcurrentChannel.withConfig) and the ChannelType.ConsumerSide that can be specified per consumer (see consumeWithConfig).

    The default is set to MultiProducer and MultiConsumer, which is always the safe choice, however these can be customized for better performance.

    These scenarios are available:

    • MPMC: multi-producer, multi-consumer, when MultiProducer is selected on the channel's creation and MultiConsumer is selected when consuming; this is the safe scenario and should be used as the default, especially when in doubt
    • MPSC: multi-producer, single-consumer, when MultiProducer is selected on the channel's creation and SingleConsumer is selected when consuming; this scenario should be selected when there are multiple producers, but a single worker that consumes data sequentially (per ConsumerF); note that this means a single worker per ConsumerF instance, but you can still have multiple ConsumerF instances created, because each ConsumerF gets its own buffer anyway
    • SPMC: single-producer, multi-consumer, when SingleProducer is selected on the channel's creation and MultiConsumer is selected when consuming; this scenario should be selected when there are multiple workers processing data in parallel (e.g. pulling from the same ConsumerF), but a single producer that pushes data on the channel sequentially
    • SPSC: single-producer, single-consumer, when SingleProducer is selected on the channel's creation and SingleConsumer is selected when consuming; this scenario should be selected when there is a single producer that pushes data on the channel sequentially and a single worker per ConsumerF instance that pulls data from the channel sequentially; note you can still have multiple ConsumerF instances running in parallel, because each ConsumerF gets its own buffer anyway

    The default is MPMC, because that's the safest scenario.

    import cats.implicits._
    import cats.effect.IO
    import monix.execution.ChannelType.{SingleProducer, SingleConsumer}
    
    val channel = ConcurrentChannel[IO].withConfig[Int, Int](
      producerType = SingleProducer
    )
    
    val consumerConfig = ConsumerF.Config(
      consumerType = Some(SingleConsumer)
    )
    
    for {
      producer  <- channel
      consumer1 =  producer.consumeWithConfig(consumerConfig)
      consumer2 =  producer.consumeWithConfig(consumerConfig)
      fiber1    <- consumer1.use { ref => ref.pull }.start
      fiber2    <- consumer2.use { ref => ref.pull }.start
      _         <- producer.push(1)
      value1    <- fiber1.join
      value2    <- fiber2.join
    } yield {
      (value1, value2)
    }

    Note that in this example, even if we used SingleConsumer as the type passed in consumeWithConfig, we can still consume from two ConsumerF instances at the same time, because each one gets its own internal buffer. But you cannot have multiple workers per ConsumerF in this scenario, because this would break the internal synchronization / visibility guarantees.

    WARNING: default is MPMC, however any other scenario implies a relaxation of the internal synchronization between threads.

    This means that using the wrong scenario can lead to severe concurrency bugs. If you're not sure what multi-threading scenario you have, then just stick with the default MPMC.

    Credits

    Inspired by Haskell's Control.Concurrent.Chan, but note that this isn't a straight port: e.g. the Monix implementation has a cleaner, non-leaky interface, is back-pressured and allows for termination (via halt), which changes its semantics significantly.

  5. final class ConcurrentQueue[F[_], A] extends Serializable

    A high-performance, back-pressured, generic concurrent queue implementation.

    This is the pure and generic version of monix.execution.AsyncQueue.

    Example

    import cats.implicits._
    import cats.effect._
    import monix.execution.Scheduler.global
    
    // For being able to do IO.start
    implicit val cs = SchedulerEffect.contextShift[IO](global)
    // We need a `Timer` for this to work
    implicit val timer = SchedulerEffect.timer[IO](global)
    
    def consumer(queue: ConcurrentQueue[IO, Int], index: Int): IO[Unit] =
      queue.poll.flatMap { a =>
        IO(println(s"Worker $index: $a"))
          .flatMap(_ => consumer(queue, index))
      }
    
    for {
      queue     <- ConcurrentQueue[IO].bounded[Int](capacity = 32)
      consumer1 <- consumer(queue, 1).start
      consumer2 <- consumer(queue, 2).start
      // Pushing some samples
      _         <- queue.offer(1)
      _         <- queue.offer(2)
      _         <- queue.offer(3)
      // Stopping the consumer loops
      _         <- consumer1.cancel
      _         <- consumer2.cancel
    } yield ()

    Back-Pressuring and the Polling Model

    The initialized queue can be limited to a maximum buffer size, a size that could be rounded to a power of 2, so you can't rely on it to be precise. Such a bounded queue can be initialized via ConcurrentQueue.bounded. Also see BufferCapacity, the configuration parameter that can be passed in the ConcurrentQueue.withConfig builder.

    On offer, when the queue is full, the implementation back-pressures until the queue has room again in its internal buffer, the task being completed when the value was pushed successfully. Similarly, poll waits for the queue to have items in it. This works for both bounded and unbounded queues.

    For both offer and poll, whenever waiting is needed, it is done asynchronously, without any threads being blocked.
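
    Besides the suspending offer and poll there are also non-waiting variants; a sketch, assuming tryPoll returns F[Option[A]] and completes immediately (drainAvailable is a hypothetical helper):

    import cats.effect.IO

    // Polls items until the queue reports itself empty, without
    // ever (asynchronously) waiting
    def drainAvailable(queue: ConcurrentQueue[IO, Int]): IO[List[Int]] =
      queue.tryPoll.flatMap {
        case Some(a) => drainAvailable(queue).map(rest => a :: rest)
        case None    => IO.pure(Nil)
      }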

    Multi-threading Scenario

    This queue supports a ChannelType configuration, for fine-tuning the multi-threading scenario in which it's used, which can yield better performance:

    • MPMC: multi-producer, multi-consumer
    • MPSC: multi-producer, single-consumer
    • SPMC: single-producer, multi-consumer
    • SPSC: single-producer, single-consumer

    The default is MPMC, because that's the safest scenario.

    import monix.execution.ChannelType.MPSC
    import monix.execution.BufferCapacity.Bounded
    
    val queue = ConcurrentQueue[IO].withConfig[Int](
      capacity = Bounded(128),
      channelType = MPSC
    )

    WARNING: default is MPMC, however any other scenario implies a relaxation of the internal synchronization between threads.

    This means that using the wrong scenario can lead to severe concurrency bugs. If you're not sure what multi-threading scenario you have, then just stick with the default MPMC.

  6. trait ConsumerF[F[_], E, A] extends Serializable

    A simple interface that models the consumer side of a producer-consumer communication channel.

    Currently exposed by ConcurrentChannel.consume.

    Type parameters:

    • F is the effect type used for processing tasks asynchronously
    • E is the type of the completion event
    • A is the type of the stream of events being consumed
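
    Since pull yields Either[E, A] values, with events on the Right and the completion event on the Left, a generic processing loop can be sketched like this (foldEvents is a hypothetical helper):

    import cats.effect.IO

    // Folds the incoming events until the channel is halted,
    // then returns the accumulated state
    def foldEvents[E, A, S](consumer: ConsumerF[IO, E, A], state: S)
      (f: (S, A) => S): IO[S] =
      consumer.pull.flatMap {
        case Right(a) => foldEvents(consumer, f(state, a))(f)
        case Left(_)  => IO.pure(state)
      }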

  7. trait FutureLift[F[_], Future[_]] extends ~>[[A]F[Future[A]], F]

    A type class for conversions from scala.concurrent.Future or other Future-like data types (e.g. Java's CompletableFuture).

    N.B. to use its syntax, you can import monix.catnap.syntax:

    import monix.catnap.syntax._
    import scala.concurrent.Future
    // Used here only for Future.apply as the ExecutionContext
    import monix.execution.Scheduler.Implicits.global
    // Can use any data type implementing Async or Concurrent
    import cats.effect.IO
    
    val io = IO(Future(1 + 1)).futureLift

    IO provides its own IO.fromFuture of course, however FutureLift is generic and works with CancelableFuture as well.

    import monix.execution.{CancelableFuture, Scheduler}
    import scala.concurrent.Promise
    import scala.concurrent.duration._
    import scala.util.Try
    
    def delayed[A](event: => A)(implicit s: Scheduler): CancelableFuture[A] = {
      val p = Promise[A]()
      val c = s.scheduleOnce(1.second) { p.complete(Try(event)) }
      CancelableFuture(p.future, c)
    }
    
    // The result will be cancelable:
    val sum: IO[Int] = IO(delayed(1 + 1)).futureLift
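
    And since FutureLift is just a natural transformation from F[Future[A]] to F[A], code can stay polymorphic in the Future-like type; a minimal sketch (the liftAll helper is hypothetical):

    import monix.catnap.FutureLift

    // Works for any F[_] and Future-like MF[_] that have a
    // FutureLift instance, applied via FunctionK's `apply`
    def liftAll[F[_], MF[_], A](fas: List[F[MF[A]]])
      (implicit lift: FutureLift[F, MF]): List[F[A]] =
      fas.map(fa => lift(fa))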

  8. final class MVar[F[_], A] extends cats.effect.concurrent.MVar[F, A]

    A mutable location, that is either empty or contains a value of type A.

    It has the following fundamental atomic operations:

    • put which fills the var if empty, or blocks (asynchronously) until the var is empty again
    • tryPut which fills the var if empty, returning true if successful
    • take which empties the var if full, returning the contained value, or blocks (asynchronously) otherwise until there is a value to pull
    • tryTake which empties the var if full, returning the contained value, or None if empty
    • read which reads the current value without touching it, assuming there is one, or otherwise it waits until a value is made available via put
    • tryRead returns Some(a) if full, without modifying the var, or else returns None
    • isEmpty returns true if currently empty

    The MVar is appropriate for building synchronization primitives and performing simple inter-thread communications. If it helps, it's similar to a BlockingQueue(capacity = 1), except that it is pure and doesn't block any threads, all waiting being done asynchronously.
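
    As an example of such a synchronization primitive, a simple asynchronous mutex can be sketched on top of take and put, assuming the MVar[IO].of builder (mirroring cats.effect.concurrent.MVar.of):

    import cats.effect.{ContextShift, IO}
    import monix.execution.Scheduler

    implicit val cs: ContextShift[IO] = IO.contextShift(Scheduler.global)

    // The MVar holds a single () token: `take` acquires exclusive
    // access, while `put` releases it, even if `io` fails
    def withLock[A](lock: MVar[IO, Unit])(io: IO[A]): IO[A] =
      lock.take.bracket(_ => io)(_ => lock.put(()))

    val program = MVar[IO].of(()).flatMap { lock =>
      withLock(lock)(IO(println("in the critical section")))
    }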

    Given its asynchronous, non-blocking nature, it can be used on top of JavaScript as well.

    N.B. this is a reimplementation of the interface exposed in Cats-Effect, see: cats.effect.concurrent.MVar

    Inspired by Control.Concurrent.MVar from Haskell.

  9. sealed trait OrElse[+A, +B] extends AnyRef

    A type class for prioritized implicit search.

    Useful for specifying type class instance alternatives (a usage sketch follows the list). Examples:

    • Async[F] OrElse Sync[F]
    • Concurrent[F] OrElse Async[F]
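
    A sketch of how an API might detect which instance was picked, assuming a fold operation on OrElse (as in the Shapeless version):

    import cats.effect._

    // Resolves to the Concurrent[F] instance when one is in scope,
    // falling back to Async[F] otherwise
    def instanceUsed[F[_]](implicit ev: Concurrent[F] OrElse Async[F]): String =
      ev.fold(_ => "Concurrent", _ => "Async")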

    Inspired by the implementations in Shapeless and Algebra.

  10. trait ProducerF[F[_], E, A] extends Serializable

    A simple interface that models the producer side of a producer-consumer communication channel.

    In a producer-consumer communication channel we've got these concerns to take care of:

    • back-pressure, which is handled automatically via this interface
    • halting the channel with a final event and informing all current and future consumers about it, while stopping future producers from pushing more events

    The ProducerF interface takes care of these concerns via:

    • the F[Boolean] result, which should return true for as long as the channel wasn't halted, so further events can be pushed; these tasks also block (asynchronously) when internal buffers are full, so back-pressure concerns are handled automatically
    • halt, being able to close the channel with a final event that will be visible to all current and future consumers

    Currently implemented by ConcurrentChannel.
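
    For example, since push signals the halted state via its F[Boolean] result, a producer loop can stop early; a small sketch (pushAll is a hypothetical helper):

    import cats.effect.IO

    // Pushes items one by one, stopping as soon as `push` yields
    // false, i.e. when the channel has been halted
    def pushAll[E, A](producer: ProducerF[IO, E, A], items: List[A]): IO[Unit] =
      items match {
        case head :: tail =>
          producer.push(head).flatMap {
            case true  => pushAll(producer, tail)
            case false => IO.unit
          }
        case Nil =>
          IO.unit
      }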

  11. final class Semaphore[F[_]] extends cats.effect.concurrent.Semaphore[F]

    The Semaphore is an asynchronous semaphore implementation that limits the parallelism on task execution.

    The following example instantiates a semaphore with a maximum parallelism of 10:

    import cats.implicits._
    import cats.effect.IO
    
    // Needed for ContextShift[IO]
    import monix.execution.Scheduler
    implicit val cs = IO.contextShift(Scheduler.global)
    
    // Dummies for didactic purposes
    case class HttpRequest()
    case class HttpResponse()
    def makeRequest(r: HttpRequest): IO[HttpResponse] = IO(???)
    
    for {
      semaphore <- Semaphore[IO](provisioned = 10)
      tasks = for (_ <- 0 until 1000) yield {
        semaphore.withPermit(makeRequest(HttpRequest()))
      }
      // Execute in parallel; note that due to the `semaphore`
      // no more than 10 tasks will be allowed to execute in parallel
      _ <- tasks.toList.parSequence
    } yield ()

    Credits

    Semaphore now implements cats.effect.concurrent.Semaphore, deprecating the old Monix TaskSemaphore.

    The changes to the interface and some implementation details are inspired by the implementation in Cats-Effect, which was ported from FS2.

Value Members

  1. object CancelableF
  2. object CircuitBreaker extends CircuitBreakerDocs
  3. object ConcurrentChannel extends Serializable
  4. object ConcurrentQueue extends Serializable
  5. object ConsumerF extends Serializable
  6. object FutureLift extends FutureLiftForPlatform with Serializable
  7. object MVar
  8. object OrElse extends OrElse0
  9. object SchedulerEffect
  10. object Semaphore
  11. object syntax
