Mining Monero with an Nvidia GPU in an unprivileged container

The problem

As my interest in cryptocurrency grew over the last couple of weeks, I decided an attempt at mining would be a good idea. So I went out, got some second-hand GPUs (GTX 760 OC) and wanted to mine Monero with them.

This raises the problem of running the miner software directly on our operating system (which in my humble opinion is not always secure, and a miner is a high-value target). This could be solved with virtualization, but that adds a lot of overhead. Looking into LXC containers, this seems to be a viable solution, as both the overhead and the privileges can be kept to a minimum.

Containers should not be your last line of defence and should be run unprivileged. So in this post I give a short note on how I have set things up with the above in mind.

Used software: LXC, Nvidia CUDA, XMR-STAK, Ubuntu 16.04 amd64

Setting up CUDA

Firstly we need an NVIDIA GPU supported by CUDA, and of course the proprietary software. You can find CUDA here:

I use Ubuntu 16.04 amd64 in this post, as it is user-friendly and will be supported for a long time to come.

Before you install CUDA, run apt-get update and apt-get dist-upgrade, then reboot.

You will have to install CUDA twice: once now and once in the container. The installation is pretty straightforward; just follow the instructions. For the last step, however, I recommend using apt-get with the --no-install-recommends flag, which pulls in far fewer unneeded packages. The command will be: sudo apt-get --no-install-recommends install cuda

After installing CUDA, reboot so the nouveau driver is replaced by the NVIDIA driver.

Check that everything is okay by running the command nvidia-smi. This should print information about your card and shows that the NVIDIA driver is working.

Setting up the container

The container setup is well described at

I would do the network setup a little differently, so the container can be accessed on the network with SSH. So I will first create a network bridge by editing /etc/network/interfaces and replacing the primary network interface line with:

iface <interface> inet manual

and adding the lines:

auto br0
iface br0 inet dhcp
bridge_ports <interface>
bridge_stp off
bridge_maxwait 0
bridge_fd 0
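
Putting the pieces above together, a complete /etc/network/interfaces could look like the sketch below. The interface name eth0 is a placeholder; substitute the name of your actual primary interface:

```
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_stp off
    bridge_maxwait 0
    bridge_fd 0
```

With this in place, br0 takes over the DHCP address and the physical interface is enslaved to the bridge.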

To run LXC containers (and bridges) we need to install the LXC package:
sudo apt-get install lxc
The easiest way to get your new network setup running is by rebooting your system.
After the reboot we create an lxc user:
sudo useradd -m lxc
then become the lxc user:
sudo -s
su - lxc

Get the subuid and subgid and copy these to your clipboard / leafpad / gedit / etc.:
grep lxc /etc/sub*
Now create the config directory:
mkdir -p .config/lxc
Create the file default.conf in .config/lxc with the following content:
lxc.id_map = u 0 <subuid> 65536
lxc.id_map = g 0 <subgid> 65536
lxc.network.type = veth
lxc.network.link = br0
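
The lookup-and-edit steps above can be sketched as a small shell snippet. The range start 231072 is sample data; on a real host you would read /etc/subuid and /etc/subgid, and your grep output will differ:

```shell
# Sketch: derive the id_map lines for default.conf from subuid/subgid
# entries. Sample data is used here instead of the real files.
sample_subuid='lxc:231072:65536'
sample_subgid='lxc:231072:65536'

# Extract the second colon-separated field (the start of the id range).
subuid=$(printf '%s\n' "$sample_subuid" | awk -F: '$1=="lxc"{print $2}')
subgid=$(printf '%s\n' "$sample_subgid" | awk -F: '$1=="lxc"{print $2}')

printf 'lxc.id_map = u 0 %s 65536\n' "$subuid"
printf 'lxc.id_map = g 0 %s 65536\n' "$subgid"
```

On the real host you would redirect this output into .config/lxc/default.conf instead of printing it.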

Now add the following line to /etc/lxc/lxc-usernet (this allows the lxc user to create up to 10 veth devices attached to br0):
lxc veth br0 10
Install the container:
lxc-create -n<container name> -t download
choose ubuntu, xenial, amd64
Start the container:
lxc-start -n<container name>
Attach to the container:
lxc-attach -n<container name>
This will print some errors, which can be ignored. You should get a root prompt. Append the sbin directories to $PATH:
export PATH=$PATH:/sbin:/usr/sbin
Now you can install the packages you like (openssh-server, sudo, vim), add a user, and add this user to the sudo group:
apt-get install openssh-server vim sudo
adduser <username>
adduser <username> sudo

Now you can close the terminal you're working on and SSH into the container.

Setting up CUDA in the container

Setting up CUDA in the container is exactly the same as on the host. After setting up CUDA we need access to the GPU inside the container. The devices in /dev starting with nvidia (like /dev/nvidia0) should be bind-mounted into the container. This can be done by adding a line for each device to /home/lxc/.local/share/lxc/<container name>/config:
lxc.mount.entry = /dev/nvidia<xxx> dev/nvidia<xxx> none bind,optional,create=file
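
Instead of adding the lines by hand, a small loop can generate one entry per device node. The device names below (nvidia0, nvidiactl, nvidia-uvm) are typical examples, not a guaranteed list; on a real host you would glob /dev/nvidia* and append the output to the container config:

```shell
# Sketch: print one bind-mount entry per NVIDIA device node.
# Sample device list; on the host, use: for dev in /dev/nvidia*; do ...
for dev in /dev/nvidia0 /dev/nvidiactl /dev/nvidia-uvm; do
  # The target path inside the container has no leading slash.
  printf 'lxc.mount.entry = %s %s none bind,optional,create=file\n' \
    "$dev" "${dev#/}"
done
```
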
After this, the container should be shut down and started again as the lxc user, as shown earlier.
Now the command nvidia-smi should work in your container too.

Setting up XMR-STAK in the container

In the container:
sudo apt-get install git libmicrohttpd-dev libssl-dev cmake build-essential
git clone
cd xmr-stak
mkdir build
cd build
cmake ..
make -j$(nproc)

Now XMR-STAK should be built. You can run it from the build's bin directory:
cd bin
./xmr-stak
Answer the questions and you should be ready to mine in an unprivileged container.