---
title: "Parallel Computing in R with `parallel`"
author: "Isaac Quintanilla Salinas"
description: Tutorial for parallel processing using the R package parallel.
date: "12-19-2020"
output: 
  html_notebook:
    toc: true
    toc_float: true
---

```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE)
knitr::opts_chunk$set(cache = TRUE)
```

# Introduction

R is designed to only use one cpu (or core) when running tasks. However, a computer may have more than one core that can be used to run tasks. The use of more than one core is known as parallel computing in R. The goal of this tutorial is to provide the basics of using the `parallel` package and utilizing more cores in a computer.

For more information about parallel processing in R, visit the following sites:

-   <https://nceas.github.io/oss-lessons/parallel-computing-in-r/parallel-computing-in-r.html>
-   <https://dept.stat.lsa.umich.edu/~jerrick/courses/stat701/notes/parallel.html>
-   <https://psu-psychology.github.io/r-bootcamp-2018/talks/parallel_r.html>

# Background

The `parallel` package allows you to use multiple cpus at the same time to complete repetitive tasks. For example, if you have to run a simulation study with 100 data sets, you can task 4 cpus to analyze 25 data sets each at the same time. Ideally, this will take a fourth of the time it would take 1 cpu to complete all 100 data sets.

The only caveat with using the `parallel` package is that each process needs to be independent of all other processes. If you are running a for loop in which each iteration depends on the previous one, a cpu cannot process an iteration on its own because it is missing information. Therefore, you will need to identify the tasks that are not dependent on one another and parallelize those.
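
For example, the minimal sketch below contrasts a loop whose iterations depend on each other (and therefore cannot be parallelized) with one whose iterations are independent:

```{r, eval=FALSE}
# Cannot be parallelized: each iteration needs the result of the previous one
x <- numeric(10)
x[1] <- 1
for (i in 2:10) {
  x[i] <- x[i - 1] * 2
}

# Can be parallelized: each iteration stands on its own
results <- vector("list", 10)
for (i in 1:10) {
  results[[i]] <- mean(rnorm(100))
}
```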

With the `parallel` package you can use two families of functions: the `mc*()` and `par*()` functions. The main difference between them is that the `mc*()` functions use a forking method to parallelize your code, while the `par*()` functions use a socket method. Both methods have their strengths and weaknesses. When using the HPCC cluster, I recommend using the `mc*()` functions when possible. Brief descriptions of each method are given below.

## Forking

The forking method is implemented when using the `mc*()` functions. Below is a list of pros and cons:

### Pros:

-   Easier to implement

-   Your environment is copied to the cpu

-   Has the potential to use less resources

-   Wraps the user's function with the `try()` function

### Cons:

-   Only works on POSIX systems (Mac or Linux)

-   Does not work with multiple nodes[^1]

-   May cause cross-contamination because the environment may change[^2]

[^1]: You can think of a node as a single machine or processor. Each node has a set number of cpus; if you need more cpus, you will need to use another node.

[^2]: Mostly occurs with random number generation; however, this is generally not a problem.

Generally speaking, R copies its environment to a cpu and runs the process.
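
As a minimal sketch (assuming a Mac or Linux machine), forked workers receive a copy of the current environment, so objects such as `offset` below are available without any explicit export:

```{r, eval=FALSE}
library(parallel)
offset <- 10
# Each forked worker inherits a copy of the environment, so `offset` is visible
mclapply(1:4, function(i) i + offset, mc.cores = 2)
```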

## Socket

The socket method is implemented when using the `par*()` functions. Below is a list of pros and cons:

### Pros:

-   Works on any system (Windows, Mac, and Linux)

-   Each process is unique

-   No cross-contamination

-   Can be used when multiple nodes are involved

### Cons:

-   The entire environment (including loaded functions) must be specified

-   More resource intensive

-   The `try()` function is not implemented; when one cpu fails, the entire cluster fails

Generally speaking, R creates a new environment on each cpu.
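
As a minimal sketch, socket workers start with a fresh environment, so anything they need (here the hypothetical object `offset`) must be exported explicitly:

```{r, eval=FALSE}
library(parallel)
offset <- 10
cl <- makeCluster(2)
# Socket workers do not see the global environment; export what they need
clusterExport(cl, "offset")
parLapply(cl, 1:4, function(i) i + offset)
stopCluster(cl)
```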

## Where to parallelize?

Parallel computing requires repetitive tasks to be independent of each other. The results from one task must not be used as input for another task. Therefore, identify blocks of code where repetition is involved. This can be either `for` loops or `*apply()` functions.
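
For instance, a repetitive `for` loop over independent pieces of data can usually be rewritten with `lapply()`, which can then be swapped for a parallel version later (here `datasets` and `fit_model()` are hypothetical placeholders):

```{r, eval=FALSE}
# A repetitive for loop over independent data sets ...
fits <- vector("list", length(datasets))
for (i in seq_along(datasets)) {
  fits[[i]] <- fit_model(datasets[[i]])
}

# ... rewritten with lapply(), which can later become mclapply() or parLapply()
fits <- lapply(datasets, fit_model)
```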

Other locations to parallelize your code are bottlenecks. Having multiple cores processing bottlenecks helps speed up your code. For more information about bottlenecks or code efficiency, visit the Advanced R website: [adv-r.hadley.nz](https://adv-r.hadley.nz/).

Identifying where to parallelize may be challenging because there are often many options. My advice is to try several approaches and see which provides the best result.

# Simulation Example

To demonstrate how to use the `parallel` package in R, we conduct a simple simulation study showing that the estimates from the ordinary least squares estimator are unbiased for the model below.

$$
Y \sim N(\beta_0+X_1\beta_1+X_2\beta_2+X_3\beta_3,\sigma^2)
$$

## Simulation Functions

The function below generates data from the model above and returns a data frame.

```{r}
data_sim <- function(seed, nobs, beta, sigma, xmeans, xsigs){
  set.seed(seed)
  # Generate the predictors from a multivariate normal distribution
  xrn <- rmvnorm(nobs, mean = xmeans, sigma = xsigs)
  # Add a column of ones for the intercept
  xped <- cbind(rep(1, nobs), xrn)
  # Generate the outcome: linear predictor plus normal error
  y <- xped %*% beta + rnorm(nobs, 0, sigma)
  df <- data.frame(x = xrn, y = y)
  return(df)
}
```

The function needs the following arguments:

-   `seed`: the value to set for the random number generator
-   `nobs`: number of observations
-   `beta`: a vector specifying the true values for the regression coefficients ($\beta_0$, $\beta_1$, $\beta_2$, $\beta_3$)
-   `sigma`: the standard deviation ($\sigma$) of the error term in the model above
-   `xmeans`: a vector of means used to generate the values for $X_1$, $X_2$, and $X_3$
-   `xsigs`: the covariance matrix for $X_1$, $X_2$, and $X_3$
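
For example, a single data set could be generated with a call along these lines (using the parameter values set later in the tutorial):

```{r, eval=FALSE}
one_df <- data_sim(seed = 1, nobs = 200, beta = c(5, 4, -5, -3),
                   sigma = 3, xmeans = c(-2, 0, 2), xsigs = diag(3))
head(one_df)
```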

The function below takes the data generated by `data_sim()` and fits a linear regression model. The function wraps around the `lm()` function and returns the estimated regression coefficients.

```{r}
parallel_lm <- function(data){
  # Fit the linear model and return the estimated coefficients
  lm_res <- lm(y ~ x.1 + x.2 + x.3, data = data)
  return(coef(lm_res))
}
```

### Loading R Packages

For this tutorial, you will need the `parallel` and `mvtnorm` packages. The `mvtnorm` package provides the `rmvnorm()` function used in `data_sim()`.

```{r}
library(parallel)
library(mvtnorm)
```

### Setting Parameters

```{r}
N <- 10000                  # number of simulated data sets
nobs <- 200                 # observations per data set
beta <- c(5, 4, -5, -3)     # true regression coefficients
xmeans <- c(-2, 0, 2)       # means for X1, X2, X3
xsigs <- diag(rep(1, 3))    # covariance matrix for X1, X2, X3
sig <- 3                    # standard deviation of the error term
```

### Simulating Data

```{r}
start <- Sys.time()
parallel_data <- lapply(1:N, data_sim,
                        nobs = nobs, beta = beta, sigma = sig,
                        xmeans = xmeans, xsigs = xsigs)
Sys.time()-start
```

# Parallelization

## Detecting Cores

The `parallel` package has a function that can detect the number of cores available on your computer. Use the `detectCores()` function to find how many cpus your computer has[^3].

[^3]: The function provides the number of logical cores and not physical cores. This is due to multithreading.

```{r}
detectCores()
```
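
As noted in the footnote, `detectCores()` reports logical cores. If you want the number of physical cores instead, the function accepts a `logical` argument (although on some platforms this may return `NA`):

```{r, eval=FALSE}
# Physical cores only; may return NA on some systems
detectCores(logical = FALSE)
```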

For this example, I will use 8 cpus. You can change the number of cpus based on your computer.

```{r}
ncores <- 8
```

## Using `lapply()`

Before we parallelize, use the `lapply()` function to run the simulation on one cpu.

```{r}
start <- Sys.time()
lapply_results <- lapply(parallel_data, parallel_lm)
(lapply_time <- Sys.time()-start)
```

## Using `mclapply()`

The easiest way to parallelize is to use the `mclapply()` function. All you need to do is change `lapply()` to `mclapply()` and add `mc.cores = ncores` as the last argument. The `mc.cores` argument specifies the number of cores to use for parallelization.

```{r}
start <- Sys.time()
mclapply_results <- mclapply(parallel_data, parallel_lm, mc.cores = ncores)
(mclapply_time <- Sys.time()-start)
```

## Using `parLapply()`

### Functions

Using the `parLapply()` function requires setting up a cluster. Two functions are needed to set up the cluster: `makeCluster()` and `clusterExport()`. Once your analysis is finished, you will need to shut down the cluster with the `stopCluster()` function.

#### `makeCluster()`

The `makeCluster()` function sets up the cluster on your computer. You only need to specify the number of cores to use (`ncores`) and store the result in an R object (`cl`).

#### `clusterExport()`

The `clusterExport()` function is needed to export user-created functions, package functions, or data to the cluster. You will need to specify the cluster (`cl`) to export the information to, and the names of the functions or data as a character vector.
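
As a sketch (assuming a cluster `cl` has already been created with `makeCluster()`), the first call below exports the user-defined function used later in this tutorial; if the workers also needed a package (they do not in this example), it could be loaded on every worker with `clusterEvalQ()`:

```{r, eval=FALSE}
clusterExport(cl, c("parallel_lm"))      # export a user-defined function by name
clusterEvalQ(cl, library(mvtnorm))       # load a package on every worker, if needed
```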

#### `parLapply()`

The `parLapply()` function is the parallel computing component. It is used the same way as the `lapply()` function. The only difference is that you will need to specify the cluster (`cl`) as the first argument; the remaining arguments are the same as for `lapply()`.

#### `stopCluster()`

The `stopCluster()` function closes the cluster down.

### Implementation

The code below implements the `parLapply()` function:

```{r}
start <- Sys.time()
cl <- makeCluster(ncores)                     # set up the cluster
clusterExport(cl, c("parallel_lm"))           # export the user-defined function
parLapply_results <- parLapply(cl, parallel_data, parallel_lm)
stopCluster(cl)                               # shut down the cluster
(parLapply_time <- Sys.time()-start)
```

# Results

Since all the results are in a list, we will first convert them into a matrix:

```{r}
lapply_mat <- matrix(unlist(lapply_results), nrow = 4)
mclapply_mat <- matrix(unlist(mclapply_results), nrow = 4)
parLapply_mat <- matrix(unlist(parLapply_results), nrow = 4)
```

Use `rowMeans()` on all three objects to show that each method provides the same estimates, and that the averaged estimates are close to the true regression parameters.

```{r}
rowMeans(lapply_mat)
rowMeans(mclapply_mat)
rowMeans(parLapply_mat)
```
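
The averaged estimates can also be compared directly with the true coefficients stored in `beta`:

```{r, eval=FALSE}
# Compare the averaged estimates with the true regression coefficients
rbind(estimate = rowMeans(lapply_mat), truth = beta)
```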

# Notes

## Speed Gains

When parallelizing your code, the speed gains may not always be noticeable. When using the `parallel` package, R must distribute the elements of a list or vector to the different cores; this setup cost is known as overhead, and it takes time. Sending the data to the cores may take longer than processing it, so parallel code may be no faster, or even slower, than the `*apply()` functions. In that case it may be best to use one core or split up the data on your own. However, when processing your data takes much longer than the overhead, the speed gains are noticeable.
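
As a small sketch of this trade-off, a trivial task like taking square roots is dominated by overhead, so the parallel version may be no faster (or even slower) than the sequential one:

```{r, eval=FALSE}
# Overhead dominates for trivial tasks; the parallel call may not be faster
system.time(lapply(1:1e5, sqrt))
system.time(mclapply(1:1e5, sqrt, mc.cores = ncores))
```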

## Hyper-threading

Certain computer processors have cores that are multi-threaded (hyper-threading). This means the computer views 1 physical core as 2 logical cores. While you can certainly specify double the number of physical cores for your cluster, the speed gains are not equivalent to using that many physical cores. In my experience, it is not worth using the logical cores; just specify the number of physical cores in your computer.
