| Topic | Description |
| --- | --- |
| big.data.object | The big data object. |
| c.keyval | Create, project or concatenate key-value pairs |
| dfs.empty | Get the size of a directory or file, or check whether it is empty |
| dfs.size | Get the size of a directory or file, or check whether it is empty |
| equijoin | Equijoins using MapReduce |
| from.dfs | Read or write R objects from or to the file system |
| gather | Functions to split a file over several parts or to merge multiple parts into one |
| increment.counter | Set the status and define and increment counters for a Hadoop job |
| keys | Create, project or concatenate key-value pairs |
| keyval | Create, project or concatenate key-value pairs |
| make.input.format | Create combinations of settings for flexible IO |
| make.output.format | Create combinations of settings for flexible IO |
| mapreduce | MapReduce using Hadoop Streaming |
| rmr | A package to perform MapReduce computations in R |
| rmr.options | Function to set and get package options |
| rmr.sample | Sample large data sets |
| rmr.str | Print a variable's content |
| scatter | Functions to split a file over several parts or to merge multiple parts into one |
| status | Set the status and define and increment counters for a Hadoop job |
| to.dfs | Read or write R objects from or to the file system |
| to.map | Create map and reduce functions from other functions |
| to.reduce | Create map and reduce functions from other functions |
| values | Create, project or concatenate key-value pairs |
| vsum | Fast small sums |