mirai - Minimalist Async Evaluation Framework for R

Table of Contents

  1. Example 1: Compute-intensive Operations
  2. Example 2: I/O-bound Operations
  3. Example 3: Resilient Pipelines
  4. Daemons: Local Persistent Processes
  5. Distributed Computing: Remote Daemons
  6. Distributed Computing: Launching Daemons
  7. Distributed Computing: TLS Secure Connections
  8. Compute Profiles
  9. Errors, Interrupts and Timeouts
  10. Serialization: Arrow, polars and beyond
  11. Map Functions

Example 1: Compute-intensive Operations

Use case: minimise execution times by performing long-running tasks concurrently in separate processes.

Multiple long computes (model fits etc.) can be performed in parallel on available computing cores.

Use mirai() to evaluate an expression asynchronously in a separate, clean R process.

A ‘mirai’ object is returned immediately.

library(mirai)

input <- list(x = 2, y = 5, z = double(1e8))

m <- mirai(
  {
    res <- rnorm(1e8, mean = mean, sd = sd)
    max(res) - min(res)
  },
  mean = input$x,
  sd = input$y
)

Above, all name = value pairs are passed through to the mirai via the ... argument.

Whilst the async operation is ongoing, attempting to access the data yields an ‘unresolved’ logical NA.

m
#> < mirai [] >
m$data
#> 'unresolved' logi NA

To check whether a mirai has resolved:

unresolved(m)
#> [1] TRUE

To wait for and collect the evaluated result, use the mirai’s [] method:

m[]
#> [1] 59.85764

It is not necessary to wait, as the mirai resolves automatically whenever the async operation completes; the evaluated result is then available at $data.

m
#> < mirai [$data] >
m$data
#> [1] 59.85764

For easy programmatic use of mirai(), ‘.expr’ accepts a pre-constructed language object, and also a list of named arguments passed via ‘.args’. So, the following would be equivalent to the above:

expr <- quote({
  res <- rnorm(1e8, mean = mean, sd = sd)
  max(res) - min(res)
})

args <- list(mean = input$x, sd = input$y)

m <- mirai(.expr = expr, .args = args)

m[]
#> [1] 57.34813

« Back to ToC

Example 2: I/O-bound Operations

Use case: ensure execution flow of the main process is not blocked.

High-frequency real-time data cannot be written to file/database synchronously without disrupting the execution flow.

Cache data in memory and use mirai() to perform periodic write operations concurrently in a separate process.

Below, ‘.args’ is used to pass environment(), which is the calling environment. This provides a convenient method of passing in existing objects.

library(mirai)

x <- rnorm(1e6)
file <- tempfile()

m <- mirai(write.csv(x, file = file), .args = environment())

A ‘mirai’ object is returned immediately.

unresolved() may be used in control flow statements to perform actions which depend on resolution of the ‘mirai’, both before and after.

This means there is no need to actually wait (block) for a ‘mirai’ to resolve, as the example below demonstrates.

# unresolved() queries for resolution itself so no need to use it again within the while loop
while (unresolved(m)) {
  cat("while unresolved\n")
  Sys.sleep(0.5)
}
#> while unresolved
#> while unresolved
cat("Write complete:", is.null(m$data))
#> Write complete: TRUE

Now actions which depend on the resolution may be processed, for example the next write.
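
As a minimal sketch (assuming a further batch of data has since been cached in ‘x’), the next write may be launched in exactly the same way once the previous one has resolved:

# simulate the next batch of cached data and a new target file (hypothetical)
x <- rnorm(1e6)
file <- tempfile()

m <- mirai(write.csv(x, file = file), .args = environment())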

« Back to ToC

Example 3: Resilient Pipelines

Use case: isolating code that can potentially fail in a separate process to ensure continued uptime.

As part of a data science / machine learning pipeline, iterations of model training may periodically fail for stochastic and uncontrollable reasons (e.g. buggy memory management on graphics cards).

Running each iteration in a ‘mirai’ isolates this potentially-problematic code such that even if it does fail, it does not bring down the entire pipeline.

library(mirai)

run_iteration <- function(i) {

  if (runif(1) < 0.1) stop("random error\n", call. = FALSE) # simulates a stochastic error rate
  sprintf("iteration %d successful\n", i)

}

for (i in 1:10) {

  m <- mirai(run_iteration(i), environment())
  while (is_error_value(call_mirai(m)$data)) {
    cat(m$data)
    m <- mirai(run_iteration(i), environment())
  }
  cat(m$data)

}
#> iteration 1 successful
#> iteration 2 successful
#> iteration 3 successful
#> iteration 4 successful
#> iteration 5 successful
#> Error: random error
#> iteration 6 successful
#> iteration 7 successful
#> iteration 8 successful
#> iteration 9 successful
#> iteration 10 successful

By testing the return value of each ‘mirai’ for errors, error-handling code is able to automate recovery and re-attempts, as in the above example. Further details on error handling can be found in the section below.

The end result is a resilient and fault-tolerant pipeline that minimises downtime by eliminating interruptions of long computes.

« Back to ToC

Daemons: Local Persistent Processes

Daemons, or persistent background processes, may be set to receive ‘mirai’ requests.

This is potentially more efficient as new processes no longer need to be created on an ad hoc basis.

With Dispatcher (default)

Call daemons() specifying the number of daemons to launch.

daemons(6)
#> [1] 6

To view the current status, status() provides the number of active connections along with a matrix of statistics for each daemon.

status()
#> $connections
#> [1] 1
#> 
#> $daemons
#>                                     i online instance assigned complete
#> abstract://24edc24599955d9fb2cf7866 1      1        1        0        0
#> abstract://b7b0bacdeccb8d509b30d312 2      1        1        0        0
#> abstract://c26bf4a605b568fef08b8dc1 3      1        1        0        0
#> abstract://5cd2c38a0b27ea9e9f1e995a 4      1        1        0        0
#> abstract://0def82c317f5c939ec8d2691 5      1        1        0        0
#> abstract://663ef29b41c12d730b2531a3 6      1        1        0        0

The default dispatcher = TRUE creates a dispatcher() background process that connects to individual daemon processes on the local machine. This ensures that tasks are dispatched efficiently on a first-in first-out (FIFO) basis to daemons for processing. Tasks are queued at the dispatcher and sent to a daemon as soon as it can accept the task for immediate execution.

Dispatcher uses synchronisation primitives from nanonext, waiting upon rather than polling for tasks, which is efficient both in terms of consuming no resources while waiting, and also being fully synchronised with events (having no latency).
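
As a brief sketch of this behaviour (using the 6 daemons set above; no output shown), sending more tasks than there are daemons simply queues the excess at dispatcher, with each task released to the next daemon that becomes free:

# 8 tasks for 6 daemons: 6 are dispatched immediately, 2 are queued at dispatcher
ml <- lapply(1:8, function(i) mirai({ Sys.sleep(1); Sys.getpid() }))

# collecting shows the 8 results come from at most 6 distinct daemon processes
unlist(lapply(ml, function(m) m[]))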

daemons(0)
#> [1] 0

Set the number of daemons to zero to reset. This reverts to the default of creating a new background process for each ‘mirai’ request.

Without Dispatcher

Alternatively, by specifying dispatcher = FALSE, the background daemons connect directly to the host process.

daemons(6, dispatcher = FALSE)
#> [1] 6

Requesting the status now shows 6 connections, along with the host URL at $daemons.

status()
#> $connections
#> [1] 6
#> 
#> $daemons
#> [1] "abstract://4f393f0fd908c8a397f80571"

This implementation sends tasks immediately, distributing them evenly amongst daemons. Optimal scheduling is therefore not guaranteed, as the duration of tasks cannot be known a priori; for example, tasks could be queued at one daemon behind a long-running task whilst other daemons remain idle, having already completed their tasks.

The advantage of this approach is that it is low-level and does not require an additional dispatcher process. It is well-suited to working with similar-length tasks, or where the number of concurrent tasks typically does not exceed available daemons.

Everywhere

everywhere() may be used to evaluate an expression on all connected daemons and persist the resultant state, regardless of a daemon’s ‘cleanup’ setting.

everywhere(library(DBI))

The above keeps the DBI package loaded for all evaluations. Other types of setup task may also be performed, including making a common resource available, such as a database connection:

file <- tempfile()
everywhere(con <<- dbConnect(RSQLite::SQLite(), file), file = file)

By super-assignment, the connection ‘con’ will be available in the global environment of all daemon instances. Subsequent mirai calls may then make use of ‘con’.

m <- mirai(capture.output(str(con)))
m[]
#> [1] "Formal class 'SQLiteConnection' [package \"RSQLite\"] with 8 slots" 
#> [2] "  ..@ ptr                :<externalptr> "                           
#> [3] "  ..@ dbname             : chr \"/tmp/RtmpVDuADO/fileede16af840e5\""
#> [4] "  ..@ loadable.extensions: logi TRUE"                               
#> [5] "  ..@ flags              : int 70"                                  
#> [6] "  ..@ vfs                : chr \"\""                                
#> [7] "  ..@ ref                :<environment: 0x5d16971e6f68> "           
#> [8] "  ..@ bigint             : chr \"integer64\""                       
#> [9] "  ..@ extended_types     : logi FALSE"

Disconnect from the database everywhere, and set the number of daemons to zero to reset.

everywhere(dbDisconnect(con))
daemons(0)
#> [1] 0

With Method

daemons() has a with() method, which evaluates an expression with daemons created for the duration of the expression and automatically torn down upon completion. It was designed for the use case of running a Shiny app with the desired number of daemons.

with(daemons(4), shiny::runApp(app))

Note: in the above case, it is assumed the app is already created. Wrapping a call to shiny::shinyApp() would not work, as runApp() is only called implicitly when the app object is printed; printing happens after with() has returned, hence the app would run outside the scope of the with() statement.
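
As a simpler sketch of the same mechanism, any expression may be evaluated in this way, with the daemons torn down automatically when it returns:

with(
  daemons(2, dispatcher = FALSE),
  mirai(Sys.getpid())[]
)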

« Back to ToC

Distributed Computing: Remote Daemons

The daemons interface may also be used to send tasks for computation to remote daemon processes on the network.

Call daemons() specifying ‘url’ as a character string, such as ‘tcp://10.75.32.70:5555’, to which daemon processes should connect. Alternatively, use host_url() to automatically construct a valid URL.

IPv6 addresses are also supported and must be enclosed in square brackets [] to avoid confusion with the final colon separating the port. For example, port 5555 on the IPv6 address ::ffff:a6f:50d would be specified as tcp://[::ffff:a6f:50d]:5555.

For options on actually launching the daemons, please see the next section.

Connecting to Remote Daemons Through Dispatcher

The default dispatcher = TRUE creates a background dispatcher() process on the local machine, which listens to a vector of URLs that remote daemon() processes dial in to, with each daemon having its own unique URL.

In this scenario it is recommended to use websocket URLs starting ws:// rather than tcp:// (the two may otherwise be used interchangeably). A websocket URL supports a path after the port number, which can be made unique for each daemon. In this way a dispatcher can connect to an arbitrary number of daemons over a single port.

Supplying a vector of URLs allows the use of arbitrary port numbers / paths. ‘n’ does not need to be specified if it can be inferred from the length of the ‘url’ vector, for example:

daemons(url = c("ws://10.75.32.70:5566/cpu", "ws://10.75.32.70:5566/gpu", "ws://10.75.32.70:7788/1"))

Alternatively, below a single URL is supplied, along with n = 4 to specify that the dispatcher should listen at 4 URLs. In such a case, the additional URLs are generated automatically: for a TCP URL (as here) the port number is incremented, whilst for a websocket URL an integer sequence (/1, /2, etc.) is appended as the path.

daemons(n = 4, url = host_url(port = 5555))
#> [1] 4

Requesting status on the host machine:

status()
#> $connections
#> [1] 1
#> 
#> $daemons
#>                     i online instance assigned complete
#> tcp://hostname:5555 1      0        0        0        0
#> tcp://hostname:5556 2      0        0        0        0
#> tcp://hostname:5557 3      0        0        0        0
#> tcp://hostname:5558 4      0        0        0        0

As per the local case, $connections shows the single connection to dispatcher; however, $daemons now provides a matrix of statistics for the remote daemons.

Dispatcher automatically adjusts to the number of daemons actually connected. Hence it is possible to dynamically scale up or down the number of daemons according to requirements (limited to the ‘n’ URLs assigned).
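
For instance, as a hypothetical sketch, daemons may be launched for any as-yet-unused URLs at a later point via launch_remote(), scaling up towards the ‘n’ assigned:

# launch daemons for URLs 3 and 4 on an additional machine at a later time
launch_remote(3:4, remote = ssh_config("ssh://10.75.32.91"))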

To reset all connections and revert to default behaviour:

daemons(0)
#> [1] 0

Closing the connection causes the dispatcher to exit automatically, and in turn all connected daemons when their respective connections with the dispatcher are terminated.

Connecting to Remote Daemons Directly

By specifying dispatcher = FALSE, remote daemons connect directly to the host process. The host listens at a single URL, and distributes tasks to all connected daemons.

daemons(url = host_url(), dispatcher = FALSE)
#> [1] "tcp://hostname:33167"

Note that above, calling host_url() without a port value uses the default of ‘0’. This is a wildcard value that will automatically cause a free ephemeral port to be assigned. The actual assigned port is provided in the return value of the call, or it may be queried at any time via status().

The number of daemons connecting to the host URL is not limited and network resources may be added or removed at any time, with tasks automatically distributed to all connected daemons.

$connections will show the actual number of connected daemons.

status()
#> $connections
#> [1] 0
#> 
#> $daemons
#> [1] "tcp://hostname:33167"

To reset all connections and revert to default behaviour:

daemons(0)
#> [1] 0

This causes all connected daemons to exit automatically.

« Back to ToC

Distributed Computing: Launching Daemons

To launch remote daemons, supply a remote launch configuration to the ‘remote’ argument of daemons() when setting up daemons, or launch_remote() at any time afterwards.

ssh_config() may be used to generate a remote launch configuration if there is SSH access to the remote machine, or else remote_config() provides a flexible method for generating a configuration involving a custom resource manager / application.
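
As a sketch only (the sbatch options are illustrative assumptions, and ‘.’ in ‘args’ marks where the daemon launch command is substituted), a remote_config() for a resource manager such as Slurm might look like:

remote_config(
  command = "sbatch",
  args = c("--mem 512", "-n 1", "--wrap", "."),
  rscript = file.path(R.home("bin"), "Rscript"),
  quote = TRUE
)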

SSH Direct Connection

The first example below launches 4 daemons on the machine 10.75.32.90 (using the default SSH port of 22 as this was not specified), connecting back to the dispatcher URLs:

daemons(
  n = 4,
  url = host_url(ws = TRUE, port = 5555),
  remote = ssh_config(remotes = "ssh://10.75.32.90")
)

The second example below launches one daemon on each of 10.75.32.90 and 10.75.32.91 using the custom SSH port of 222:

daemons(
  n = 2,
  url = host_url(ws = TRUE, port = 5555),
  remote = ssh_config(c("ssh://10.75.32.90:222", "ssh://10.75.32.91:222"))
)

In the above examples, as the remote daemons connect back directly, port 5555 on the local machine must be open to incoming connections from the remote addresses.

SSH Tunnelling

Use of SSH tunnelling provides a convenient way to launch remote daemons without requiring the remote machine to be able to access the host. Often firewall configurations or security policies may prevent opening a port to accept outside connections.

In these cases SSH tunnelling offers a solution by creating a tunnel once the initial SSH connection is made. For simplicity, this SSH tunnelling implementation uses the same port on the host and on the corresponding node. SSH key-based authentication must also already be in place.

Tunnelling requires the hostname for ‘url’ specified when setting up daemons to be either ‘127.0.0.1’ or ‘localhost’. This is because the tunnel is created between 127.0.0.1:port (equivalently localhost:port) on each machine: the host listens at its own localhost:port, and the remotes each dial into localhost:port on their respective machines.

The below example launches 2 nodes on the remote machine 10.75.32.90 using SSH tunnelling over port 5555 (‘url’ hostname is specified as ‘localhost’):

daemons(
  url = "tcp://localhost:5555",
  remote = ssh_config(
    remotes = c("ssh://10.75.32.90", "ssh://10.75.32.90"),
    tunnel = TRUE
  )
)

Manual Deployment

As an alternative to automated launches, calling launch_remote() without specifying ‘remote’ may be used to return the shell commands for deploying daemons manually. The printed return values may be copy / pasted directly to a remote machine.

daemons(n = 2, url = host_url())
#> [1] 2
launch_remote(1:2)
#> [1]
#> Rscript -e "mirai::daemon('tcp://hostname:42711',rs=c(10407,1546822253,154704458,-596422845,-1298145368,134848905,-1905045578))"
#> 
#> [2]
#> Rscript -e "mirai::daemon('tcp://hostname:37759',rs=c(10407,-1874596641,-1360973041,-1204823043,-1637009859,754365804,-1469761153))"
daemons(0)
#> [1] 0

Note that daemons() should be set up on the host machine before launching daemon() on remote resources, otherwise the daemon instances will exit if a connection is not immediately available. Alternatively, specifying the argument autoexit = FALSE will allow daemons to wait (indefinitely) for a connection to become available.
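
For example, a daemon launched manually ahead of time on a remote machine could use (hypothetical address):

Rscript -e "mirai::daemon('tcp://10.75.32.70:5555',autoexit=FALSE)"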

« Back to ToC

Distributed Computing: TLS Secure Connections

TLS is available as an option to secure communications from the local machine to remote daemons.

Zero-configuration

An automatic zero-configuration default is implemented. Simply specify a secure URL of the form wss:// or tls+tcp:// when setting daemons, or use host_url(tls = TRUE), for example:

daemons(n = 4, url = host_url(ws = TRUE, tls = TRUE))
#> [1] 4

Single-use keys and certificates are automatically generated and configured, without requiring any further intervention. The private key is always retained on the host machine and never transmitted.

The generated self-signed certificate is available via launch_remote(). This function conveniently constructs the full shell command to launch a daemon, including the correctly specified ‘tls’ argument to daemon().

launch_remote(1)
#> [1]
#> Rscript -e "mirai::daemon('wss://hostname:34985/1',tls=c('-----BEGIN CERTIFICATE-----
#> MIIFNzCCAx+gAwIBAgIBATANBgkqhkiG9w0BAQsFADAzMREwDwYDVQQDDAhrdW1h
#> bW90bzERMA8GA1UECgwITmFub25leHQxCzAJBgNVBAYTAkpQMB4XDTAxMDEwMTAw
#> MDAwMFoXDTMwMTIzMTIzNTk1OVowMzERMA8GA1UEAwwIa3VtYW1vdG8xETAPBgNV
#> BAoMCE5hbm9uZXh0MQswCQYDVQQGEwJKUDCCAiIwDQYJKoZIhvcNAQEBBQADggIP
#> ADCCAgoCggIBAL/buY03ekZ/vO3uRSE1qRbnkEqiEjDMpAAv42Bg6eiftvldU9N1
#> WcP4vpHsRZI5yH/BZs/o580zVANDSB+JzNkvyN2usWwCRqzuoPlQm9XwimVyHkVj
#> O45wia6TPsS9dskaI9737CO5tpcHHxHbCKYEgmO6la8wSg6mUTRyVSh880ir6OUz
#> YJAsy64a135A4pejYK7kVlt+jXePfggKix2NUNfnBj67Q+s6npkYlK75VZL8qQ7Q
#> 83c0fohQKZxaw7RRE5zZnZr4hynLZtTvDQhx1VftGgi3dj5XavIKCpRzgL+GDwVU
#> 93XFQMw1F/EO/wshQJUkjWfDsnuc6yStc9bwjbUMtK/u/Hx4O8vzkniXttHJVoSH
#> xbDdvkruJnPRViuBiDXl8J/6bgwCYScKhXWX3UmqzJUSME6tpiY8LWtwp2h3iORB
#> BCa+Za0XN+4akLWIBE3Is/OYRIr5akjec+RuRs15iO5dxmHcV18bhvP3vNrmvMpR
#> y4ZdYRikleHf15mWfx31+gX/XSrn/bmLjdOzKZTECIHlWDqLPKJukLhC+23jMHG9
#> VV+/53T7rb4jUIi9NIxzDgFpddf1LXICBZLLt9uyb6FYiGFLCWhwz798xzjW04O5
#> G0DsAxxJT5nWJxlawZcdR7V19fMoB23YVe80QJFjcsDglOsCQGIEObgbAgMBAAGj
#> VjBUMBIGA1UdEwEB/wQIMAYBAf8CAQAwHQYDVR0OBBYEFEhACP0R2fmYKbewvWLP
#> bh+X/ZZvMB8GA1UdIwQYMBaAFEhACP0R2fmYKbewvWLPbh+X/ZZvMA0GCSqGSIb3
#> DQEBCwUAA4ICAQAMdoKIDFwd+5erBk6WikcOhY/WkPaWhrO6mcYXPvDp0x/rj1la
#> gNfdhMgVTJs2QviUNWZBmnD5H7doLOKT+80+Bv6DUpUh3apvSSsA+3sshTnJ1mE0
#> HcZlAg54djszN8xlofhzKOF9x61K52KIv6XkHDKG7T5EjDqaemT62L6f4PIUbq31
#> Avy28RezqF5pa0+1OMgiIL5K+Lmm5oSfJuwSb5T3whyyZF7JgW6IBa6AdtCSKl+h
#> /T7ve/v6fgdJmiJn8mcFsGHPIyJALG0KWZ0nOT0pMTEIatAi7qf0ULZhNDMHk9w/
#> WIm7TquOfUIay6MMTRXMmWE454zCDTki/Zkl08pihx+VGkOLvUtLOnfo1guEqZL4
#> A2RJjzw1EVgfzSW42QXMdOqgob/HTA2RXcbwpoI8yuXsLnaX0dH75KXGi2E0aISV
#> sGGDKCGsHhiUGz/i23l2oFsVee+qTXCREN3oK8iCTTNGXqKft8m2PV0dhqxpFGMo
#> bKmWDuYnq2z/odUysMR3/DIVpCpHgktzLWqxyPaxStpDrfQy8Y8Y2AaKFEXuzBc3
#> Eh0gmV+nnU6lkXLRJSCOEtTVtQ7vxMZNjMW6sGNeHkX8M55pvwNonbWhE2T/G8/8
#> LyQenb4VrlcFYN/JmL6yuCgo5nqHEWzmji2qAngI3ohB1naP1WHs5AaPJw==
#> -----END CERTIFICATE-----
#> ',''),rs=c(10407,841276001,-520434514,-1310147241,106170796,1601534653,-485760422))"

The printed value may be deployed directly on a remote machine.

« Back to ToC

CA Signed Certificates

As an alternative to the zero-configuration default, a certificate may also be generated via a Certificate Signing Request (CSR) to a Certificate Authority (CA), which may be a public CA or a CA internal to an organisation.

  1. Generate a private key and CSR.
  2. Provide the generated CSR to the CA for it to sign a new TLS certificate.
  3. When setting daemons, the TLS certificate and private key should be provided to the ‘tls’ argument of daemons().
  4. When launching daemons, the certificate chain to the CA should be supplied to the ‘tls’ argument of daemon() or launch_remote(). A sketch of steps 3 and 4 follows below.
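
A hedged sketch of steps 3 and 4 (file names and address are hypothetical; a single PEM file may hold the certificate together with its private key):

# on the host: supply the CA-signed certificate and private key to daemons()
daemons(n = 2, url = "wss://10.75.32.70:5555", tls = "server.pem")

# on each remote: supply the certificate chain to the CA to daemon()
mirai::daemon("wss://10.75.32.70:5555/1", tls = "ca-chain.pem")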

« Back to ToC

Compute Profiles

The daemons() interface also allows the specification of compute profiles for managing tasks with heterogeneous compute requirements, for example tasks requiring different resources such as CPU, memory or GPU.

Simply specify the argument ‘.compute’ with a profile name when calling daemons() (the default profile is named ‘default’). The daemons settings are then saved under the named profile.

To create a ‘mirai’ task using a specific compute profile, specify the ‘.compute’ argument to mirai(), which defaults to the ‘default’ compute profile.

Similarly, functions such as status(), launch_local() or launch_remote() should be called with the desired ‘.compute’ argument to apply to that particular profile.
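
A hedged sketch of how two profiles might be used side by side (the profile names and daemon counts are arbitrary):

daemons(4, .compute = "cpu")                 # one pool for general tasks
daemons(1, .compute = "gpu")                 # a separate pool, e.g. for GPU-bound tasks (local here for illustration)

m <- mirai(Sys.getpid(), .compute = "gpu")   # evaluated by a 'gpu' profile daemon
m[]

status(.compute = "cpu")                     # status of the 'cpu' profile only

daemons(0, .compute = "cpu")                 # each profile is reset individually
daemons(0, .compute = "gpu")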

« Back to ToC

Errors, Interrupts and Timeouts

If execution in a mirai fails, the error message is returned as a character string of class ‘miraiError’ and ‘errorValue’ to facilitate debugging. is_mirai_error() may be used to test for mirai execution errors.

m1 <- mirai(stop("occurred with a custom message", call. = FALSE))
m1[]
#> 'miraiError' chr Error: occurred with a custom message
m2 <- mirai(mirai::mirai())
m2[]
#> 'miraiError' chr Error in mirai::mirai(): missing expression, perhaps wrap in {}?
is_mirai_error(m2$data)
#> [1] TRUE
is_error_value(m2$data)
#> [1] TRUE

A full stack trace of evaluation within the mirai is recorded and accessible at $stack.trace on the error object.

f <- function(x) if (x > 0) stop("positive")
m3 <- mirai({f(-1); f(1)}, f = f)
m3[]
#> 'miraiError' chr Error in f(1): positive
m3$data$stack.trace
#> [[1]]
#> [1] "function(x) if (x > 0) stop(\"positive\")"
#> 
#> [[2]]
#> [1] "f(1)"

If a daemon instance is sent a user interrupt, the mirai will resolve to an object of class ‘miraiInterrupt’ and ‘errorValue’. is_mirai_interrupt() may be used to test for such interrupts.

is_mirai_interrupt(m2$data)
#> [1] FALSE

If execution of a mirai surpasses the timeout set via the ‘.timeout’ argument, the mirai will resolve to an ‘errorValue’ of 5L (timed out). This can, amongst other things, guard against mirai processes that have the potential to hang and never return.

m4 <- mirai(nanonext::msleep(1000), .timeout = 500)
m4[]
#> 'errorValue' int 5 | Timed out
is_mirai_error(m4$data)
#> [1] FALSE
is_mirai_interrupt(m4$data)
#> [1] FALSE
is_error_value(m4$data)
#> [1] TRUE

is_error_value() tests for all mirai execution errors, user interrupts and timeouts.

« Back to ToC

Serialization: Arrow, polars and beyond

Native R serialization is used for sending data between host and daemons. Some R objects by their nature cannot be serialized, such as those accessed via an external pointer. In these cases, performing ‘mirai’ operations on them would normally error.

Using the arrow package as an example:

library(arrow, warn.conflicts = FALSE)
daemons(2)
#> [1] 2
everywhere(library(arrow))

x <- as_arrow_table(iris)

m <- mirai(list(a = head(x), b = "some text"), x = x)
m[]
#> 'miraiError' chr Error: Invalid <Table>, external pointer to null

However it is possible to register custom serialization and unserialization functions as ‘refhooks’ or hooks into R’s native serialization mechanism for reference objects.

It is only required to specify them once upfront, as a list of functions. The argument ‘class’ must also be specified to register them for this class of object only.

serialization(
  refhook = list(
    arrow::write_to_raw,
    function(x) arrow::read_ipc_stream(x, as_data_frame = FALSE)
  ),
  class = "ArrowTabular"
)

m <- mirai(list(a = head(x), b = "some text"), x = x)
m[]
#> $a
#> Table
#> 6 rows x 5 columns
#> $Sepal.Length <double>
#> $Sepal.Width <double>
#> $Petal.Length <double>
#> $Petal.Width <double>
#> $Species <dictionary<values=string, indices=int8>>
#> 
#> See $metadata for additional Schema metadata
#> 
#> $b
#> [1] "some text"

It can be seen that this time, the arrow table is seamlessly handled in the ‘mirai’ process. This is the case even when the object is deeply nested inside lists or other structures.

To change registered serialization functions, just call serialization() again supplying the new functions. Here we switch to using polars, a ‘lightning fast’ dataframe library written in Rust (requires polars >= 0.16.4).

serialization(
  refhook = list(
    function(x) polars::as_polars_df(x)$to_raw_ipc(),
    polars::pl$read_ipc
  ),
  class = "RPolarsDataFrame"
)

x <- polars::as_polars_df(iris)

m <- mirai(list(a = head(x), b = "some text"), x = x)
m[]
#> $a
#> shape: (6, 5)
#> ┌──────────────┬─────────────┬──────────────┬─────────────┬─────────┐
#> │ Sepal.Length ┆ Sepal.Width ┆ Petal.Length ┆ Petal.Width ┆ Species │
#> │ ---          ┆ ---         ┆ ---          ┆ ---         ┆ ---     │
#> │ f64          ┆ f64         ┆ f64          ┆ f64         ┆ cat     │
#> ╞══════════════╪═════════════╪══════════════╪═════════════╪═════════╡
#> │ 5.1          ┆ 3.5         ┆ 1.4          ┆ 0.2         ┆ setosa  │
#> │ 4.9          ┆ 3.0         ┆ 1.4          ┆ 0.2         ┆ setosa  │
#> │ 4.7          ┆ 3.2         ┆ 1.3          ┆ 0.2         ┆ setosa  │
#> │ 4.6          ┆ 3.1         ┆ 1.5          ┆ 0.2         ┆ setosa  │
#> │ 5.0          ┆ 3.6         ┆ 1.4          ┆ 0.2         ┆ setosa  │
#> │ 5.4          ┆ 3.9         ┆ 1.7          ┆ 0.4         ┆ setosa  │
#> └──────────────┴─────────────┴──────────────┴─────────────┴─────────┘
#> 
#> $b
#> [1] "some text"

To cancel serialization functions entirely:

serialization(NULL)
daemons(0)
#> [1] 0

The ‘vec’ argument to serialization() may be specified as TRUE if the serialization functions are vectorized and take lists of objects, as is the case for safetensors, used for serialization in torch.

Please refer to the torch vignette for further examples.

« Back to ToC

Map Functions

mirai_map() performs asynchronous parallel/distributed map using mirai.

This function is analogous to purrr::map(), but returns a ‘mirai_map’ object.

The results of a mirai_map x may be collected using x[]. This waits for all asynchronous operations to complete if still in progress.

Key advantages of mirai_map():

  1. Returns immediately with all evaluations taking place asynchronously. Printing a ‘mirai map’ object shows the current completion progress.
  2. The ‘.promise’ argument allows a promise to be registered against each mirai, which can be used to perform side-effects.
  3. Returns evaluation errors as ‘miraiError’ or ‘errorValue’ as the case may be, rather than causing the entire operation to fail. This allows more efficient recovery from partial failure.
  4. Does not rely on a ‘chunking’ algorithm that attempts to split work into batches according to the number of available daemons, as implemented for example in the parallel package. Chunking cannot take into account varying or unpredictable compute times over the indices. In such cases, relying on mirai for scheduling can be optimal, as demonstrated in the example below.

library(mirai)
library(parallel)
cl <- make_cluster(4)
daemons(4, retry = FALSE)
#> [1] 4
vec <- c(1, 1, 4, 4, 1, 1, 1, 1)
system.time(mirai_map(vec, Sys.sleep)[])
#>    user  system elapsed 
#>   0.001   0.004   4.007
system.time(parLapplyLB(cl, vec, Sys.sleep))
#>    user  system elapsed 
#>   0.007   0.002   5.008
system.time(parLapply(cl, vec, Sys.sleep))
#>    user  system elapsed 
#>   0.006   0.005   8.009

.args is used to specify further constant arguments to .f - the ‘mean’ and ‘sd’ in the example below:

with(
  daemons(3, dispatcher = FALSE),
  mirai_map(1:3, rnorm, .args = list(mean = 20, sd = 2))[]
)
#> [[1]]
#> [1] 17.8442
#> 
#> [[2]]
#> [1] 20.73094 22.37725
#> 
#> [[3]]
#> [1] 18.59455 22.67821 18.91113

Use ... to further specify objects referenced but not defined in .f - the ‘do’ in the anonymous function below:

ml <- mirai_map(
  c(a = 1, b = 2, c = 3),
  function(x) do(x, as.logical(x %% 2)),
  do = nanonext::random
)
#> Warning in mirai_map(c(a = 1, b = 2, c = 3), function(x) do(x, as.logical(x%%2)), : launching one local daemon as none
#> previously set
ml
#> < mirai map [3/3] >
ml[]
#> $a
#> [1] "ae"
#> 
#> $b
#> [1] 8e aa
#> 
#> $c
#> [1] "38d3b7"

Use of mirai_map() assumes that daemons have previously been set. If not, one local (non-dispatcher) daemon is launched to allow the function to proceed. This ensures safe behaviour, but is unlikely to be optimal, so please ensure daemons are set beforehand.

When collecting the results, optionally specify .progress (to show a progress indicator) or .stop (to stop at the first error) within the []:

tryCatch(
  mirai_map(list(a = 1, b = "a", c = 3), sum)[.stop],
  error = identity
)
#> <simpleError: Error in .Primitive("sum")("a"): invalid 'type' (character) of argument>
with(
  daemons(4, dispatcher = FALSE),
  mirai_map(c(0.1, 0.2, 0.3), Sys.sleep)[.progress]
)
#> [[1]]
#> NULL
#> 
#> [[2]]
#> NULL
#> 
#> [[3]]
#> NULL
daemons(0)
#> [1] 0

« Back to ToC