Setting up Nextcloud in Qubes with Offsite Duplicity Backup


I needed a solution at home for managing files with a few high level requirements.


There are a few options available out there for providing local cloud storage; a comparison I found online has a good summary and is where I started.

I primarily went with Nextcloud because it claimed support for encryption at rest and end-to-end encryption. I eventually decided against using their encryption-at-rest model (more details on that later), and their end-to-end encryption is not fully released yet.

Seafile and Pydio are also very popular and see active development. I had done a more detailed comparison but didn’t write it down :(

Nextcloud has a very nice, polished UI and Windows/Linux/Android apps that all work very well (I don't have any iOS devices to test those).

My Environment

I would say my setup is far from ordinary and I don't know how many people will have something similar, but hey, if you do, this may be a helpful blog for you!

I have a server set up with Qubes OS. I know Qubes is not really a server OS, but under the hood it's a Xen hypervisor. I've made some customizations to my instance and setup and have found it actually makes a pretty good local server.

                           |                                      |
                           | Qubes Server                         |
                           |  ___________            ___________  |    
                   HTTPS   | |           |  Reverse |           | |        _________
                  Traffic  | | Existing  |   proxy  | Nextcloud | |  NFS  |         |
                    -------->| Webserver | -------->|    VM     |-------->|  Drobo  |
                           | |    VM     |          |           | |       |_________|
                           | |___________|          |___________| |

The internal Qubes layout is a little more complicated, but it's hard drawing shit in ascii, so I skipped the sys-net and sys-firewall VMs, which would also handle the traffic.

I’m using the Drobo for storage because I already have it and it has a lot of space on it. Also, don’t buy a drobo.

Install Nextcloud

I largely followed the instructions provided by nextcloud:

apt-get install apache2 mariadb-server libapache2-mod-php7.0
apt-get install php7.0-gd php7.0-json php7.0-mysql php7.0-curl php7.0-mbstring
apt-get install php7.0-intl php7.0-mcrypt php-imagick php7.0-xml php7.0-zip

Download the Nextcloud zip, checksum it and “install” it:

sha256sum  -c <
unzip nextcloud*.zip
sudo cp -r nextcloud /var/www
sudo chown -R www-data:www-data /var/www/nextcloud/

Setup Apache

Create /etc/apache2/sites-available/nextcloud.conf and paste:

Alias / "/var/www/nextcloud/"

<Directory /var/www/nextcloud/>
  Options +FollowSymlinks
  AllowOverride All

  <IfModule mod_dav.c>
    Dav off
  </IfModule>

  SetEnv HOME /var/www/nextcloud
  SetEnv HTTP_HOME /var/www/nextcloud
</Directory>


NOTE: My entry is different from the installation instructions; specifically, I changed the alias from /nextcloud to / because I don't want my URLs to contain /nextcloud, and this also makes the Apache reverse proxying easier.

sudo a2ensite nextcloud.conf
sudo a2enmod rewrite
sudo a2enmod headers
sudo a2enmod env
sudo a2enmod dir
sudo a2enmod mime
sudo a2enmod ssl
sudo a2ensite default-ssl
sudo service apache2 reload
sudo service apache2 restart

Setup the Share

Install cifs:

sudo apt install cifs-utils

Create the share dir:

cd /srv
sudo mkdir nextcloud_enc

Setup the persistent mount for the files, edit /etc/fstab:

// /srv/nextcloud_enc cifs credentials=/home/user/.nextcloud_share_creds,uid=www-data,gid=www-data,file_mode=0770,dir_mode=0770,iocharset=utf8,sec=ntlm 0 0

Make the credentials file in /home/user/.nextcloud_share_creds with:
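The credentials file follows the standard cifs-utils key=value format (the values below are placeholders for your share's login; the domain line is optional):

```
username=<share username>
password=<share password>
domain=<workgroup or domain, if needed>
```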


Then make it readable only by your user:

chmod 600 /home/user/.nextcloud_share_creds

Test/mount it:

sudo mount -a

Setup Encryption

For a few reasons I opted out of using the built in nextcloud encryption and instead am doing it outside the app.

My biggest concern was that the only way to decrypt files is through the app, which requires an accurate database as well as all of the files intact. Nextcloud's encryption is complicated: the file signatures require the DB to be reconstructed, and the keys are maintained per file and encrypted with user passwords. It looks like they worked hard to make the encryption good, but there is no practical way to decrypt files outside the app. There were a few threads about this with no resolution (or none to my liking).

On the one hand we want to make sure our backups are encrypted. But on the other, if the files are already encrypted we would have to re-upload an entire file for even a one-byte change, ruling out the partial backups that save time/bandwidth. Duplicity works a lot more efficiently when it's backing up the unencrypted files, and it offers its own encryption which can very simply be decrypted if you have the keys. This allows not only much more bandwidth- and space-efficient backups, but also a very simple decryption process to restore them, versus setting up a full Nextcloud instance and recovering the DB just to access files.
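To illustrate how simple that restore path is, these are the two basic shapes of a Duplicity restore (the bucket URL and file paths are placeholders; Duplicity reads the decryption passphrase from the PASSPHRASE environment variable or prompts for it):

```
# Restore the whole backup set into a local directory
# (a local target with a remote source implies "restore"):
duplicity s3://<bucket>/<path> /srv/restore

# Or pull back a single file from the set:
duplicity --file-to-restore Documents/some_file.pdf s3://<bucket>/<path> /tmp/some_file.pdf
```

No database, no app install — just Duplicity and the keys.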

Having ruled out the internal Nextcloud encryption, I decided a FUSE-based solution would do a good job of getting what I want. There are a few options in this space. A popular choice for many years has been EncFS, however a security audit done not too long ago pointed out some pretty big flaws that hadn't been addressed to my liking.

gocryptfs is one of the more recent offerings, and its site has a pretty good comparison of the contenders:

I ended up picking gocryptfs because it fit my needs the best.

PLEASE NOTE: my threat model may be much different from yours. My primary reason for encrypting files at rest relates to using a Drobo and network file share for storing files. My primary concern was a disk failure in the Drobo for which I would like to warranty the drive, and I don't want copies of all my tax returns and sensitive docs heading out the door to Seagate or Western Digital. gocryptfs does a very good job of protecting against someone getting one-time access to the encrypted files, but it does less well against someone who has ongoing access to the changing encrypted files, and it's probably not trustworthy at all if someone can create plaintext files and see the encrypted result. If your threat model is different than mine you may want to consider a different encryption scheme.

There was an audit done which talks about these concerns:

To install, download the latest release:

NOTE: I generally prefer to install software via maintained packages, and Debian has a package for gocryptfs, however it's several versions behind and I wanted something newer. The problem is, now I'm on the hook to check for new releases. GitHub has no way to monitor a repo for releases only (they do, however, have a 2+ year old issue requesting this :( ). There is no mailing list; there is nothing but checking manually :(

Extract it into /home/user/bin/.

You need to init the dir first; make sure the network share from above is mounted:

sudo -u www-data /home/user/bin/gocryptfs -init nextcloud_enc/

Make the non-encrypted dir and change some permissions on it:

cd /srv
sudo mkdir nextcloud
sudo chown www-data:www-data nextcloud
sudo chmod o-rx nextcloud

Then do a test mount to get the master key:

sudo -u www-data /home/user/bin/gocryptfs nextcloud_enc nextcloud

NOTE: Save the master key that is provided when you mount the dir!!!!
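The reason to save it: if gocryptfs.conf is ever lost or corrupted, the master key alone can still mount the data. Per the gocryptfs docs, an emergency mount looks something like this (the key below is a placeholder):

```
sudo -u www-data /home/user/bin/gocryptfs -masterkey 6f717d8b-XXXXXXXX-XXXXXXXX-XXXXXXXX nextcloud_enc nextcloud
```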

You can try adding a few files and see them show up encrypted in nextcloud_enc.

You can unmount it with umount:

sudo umount /srv/nextcloud

Create a password file for automounting and put the decryption passphrase in it:

cd ~
vi .nextcloud_goencryptfs_key
chmod 600 .nextcloud_goencryptfs_key

I then tried to put this mount in fstab, and I could manually get it to work with this entry:

/srv/nextcloud_enc /srv/nextcloud fuse./home/user/bin/gocryptfs nodev,nosuid,passfile=/home/user/.nextcloud_goencryptfs_key,force_owner=33:33,quiet,allow_other 0 0

However, on reboots it wasn't working; it seems like a timing issue with the network mount. So instead, I put an entry in the Qubes /rw/config/rc.local file:

/home/user/bin/gocryptfs -passfile /home/user/.nextcloud_goencryptfs_key -force_owner 33:33 -quiet -allow_other /srv/nextcloud_enc /srv/nextcloud

If you aren’t using Qubes, you’ll need to find another way to get the encrypted filesystem mounted at startup.

After a few reboots, this seems reliable.

Update: After setting up the app and rebooting, I am having trouble with this mount too; the app writes to the nextcloud.log file before the mount happens, which causes gocryptfs to throw an error and fail to mount.

To avoid these race conditions I ideally need a script which sequentially mounts the network share, encrypted filesystem, and then starts apache.
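A sketch of what that script might look like, reusing the paths and names from the steps above (not battle-tested — retry counts and the Apache start are assumptions you'd tune for your own boot sequence):

```
#!/bin/sh
# Bring up the storage stack in order: CIFS share, then gocryptfs,
# then Apache. Retry the network mount to give sys-net time to come up.
for i in $(seq 1 30); do
    mount /srv/nextcloud_enc 2>/dev/null && break
    sleep 2
done
mountpoint -q /srv/nextcloud_enc || exit 1

# Only mount the encrypted view once the share is really there
/home/user/bin/gocryptfs -passfile /home/user/.nextcloud_goencryptfs_key \
    -force_owner 33:33 -quiet -allow_other /srv/nextcloud_enc /srv/nextcloud \
    || exit 1

# Start Apache last so nothing writes into the mountpoint early
service apache2 start
```

With Apache disabled from starting on its own and this script run from rc.local, the nextcloud.log race above goes away.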

Setup MariaDB

Set up mariadb (the root password starts empty, so set one, and delete/disable/remove all the testing options):

sudo mysql_secure_installation

Create the user/database (you have to log in to mysql with sudo):

sudo mysql -u root -p
CREATE USER 'username'@'localhost' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON nextcloud.* TO 'username'@'localhost' IDENTIFIED BY 'password';
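The GRANT above assumes a nextcloud database; the Nextcloud installer can create it for you, but if you'd rather create it up front, something like this works (charset choice is my suggestion, not from the original instructions):

```
CREATE DATABASE IF NOT EXISTS nextcloud CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;
FLUSH PRIVILEGES;
```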

Final App Setup

Open the app and run through the setup.


Then you can make some additional tweaks to the Nextcloud config in /var/www/nextcloud/config/config.php:

'trusted_domains' =>
array (
  0 => 'localhost',
  1 => '',
),
'trusted_proxies' =>
array (
  0 => '',
),
'overwrite.cli.url' => '',
'htaccess.RewriteBase' => '/',

The trusted_domains entry allows requests to our DNS name, trusted_proxies allows requests from the proxy on my other Qubes VM, and overwrite.cli.url together with htaccess.RewriteBase rewrites URLs to remove the index.php portion.

Then run:

sudo -u www-data php /var/www/nextcloud/occ maintenance:update:htaccess

And this will update the .htaccess file with the re-write rules.

I also added an entry to the /etc/hosts file so that the DNS name would work in URLs:


Reverse Proxy

Setting up the reverse proxy is probably the part most unique to my situation: I have another 'Qube' on this machine which receives traffic on port 443 and has my Let's Encrypt SSL certs on it, so I wanted to terminate traffic on that VM and then reverse proxy it over to the Nextcloud VM.

Create a conf file in /etc/apache2/sites-available/file-ssl.conf:

<IfModule mod_ssl.c>
<VirtualHost *:443>
        ServerAdmin webmaster@localhost

        DocumentRoot /var/www/file
        <Directory />
                Options FollowSymLinks
                AllowOverride None
        </Directory>
        <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
        </Directory>

        ErrorLog ${APACHE_LOG_DIR}/file_error.log
        LogLevel warn

        CustomLog ${APACHE_LOG_DIR}/file_ssl_access.log combined

        SSLEngine on

        SSLCertificateFile    /etc/letsencrypt/live/
        SSLCertificateKeyFile /etc/letsencrypt/live/
        SSLCertificateChainFile /etc/letsencrypt/live/

        <Proxy *>
                Order deny,allow
                Allow from all
        </Proxy>
        SSLProxyEngine On
        # These are needed because my downstream server uses a self signed cert
        SSLProxyVerify none
        SSLProxyCheckPeerCN Off
        SSLProxyCheckPeerName Off
        SSLProxyCheckPeerExpire Off
        ProxyPreserveHost On
        ProxyRequests Off

        ProxyPass / retry=0
        ProxyPassReverse /
</VirtualHost>
</IfModule>

NOTE: Notice the retry=0 after the ProxyPass config. I had trouble with the proxy throwing 503 Service Unavailable errors. After looking at the logs on the proxy server and verifying the app was still running on the Nextcloud side, I realized that for some reason Apache on the proxy end thought the downstream server was down and stopped proxying requests. I decided to just force Apache to cut the default wait period from 60s down to 0 with retry=0.

Enable the config and reload apache:

sudo a2ensite file-ssl.conf
sudo service apache2 reload

For this to work, the qubes-firewall needs an entry to allow traffic between the proxy server and Nextcloud, and the Nextcloud VM needs an iptables rule to allow incoming traffic from the proxy VM on 443. I can add this in if someone really needs it; comment below.

I also had to update my letsencrypt cert to include the new domain, I’m skipping those steps as well, comment below if you would like to see them.

Setup EXIM for email

I use exim to act as a Mail Transport Agent to forward email to my Runbox account, then Runbox sends the email to the intended destination.

I have Runbox set up with app passwords so that I can generate a unique password to use below for auth.

Install exim and reconfigure:

sudo apt-get install exim4-daemon-light
sudo dpkg-reconfigure exim4-config

First Option:

mail sent by smarthost; no local mail

System mail name:


IP-addresses to listen:

127.0.0.1 ; ::1

local domains:


visible domain for local users:


IP address outgoing smarthost:


Keep number of DNS-queries minimal:


Split configuration into small files:


Update /etc/exim4/passwd.client:

### target.mail.server.example:login:password

Create and add this to /etc/exim4/exim4.conf.localmacros:
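(My exact entry isn't shown, but on Debian the usual localmacros addition for a TLS smarthost setup is enabling TLS; verify against the exim4 README on your system:)

```
MAIN_TLS_ENABLE = 1
```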


Stop the service, reload the config, then start the service:

sudo systemctl stop exim4
sudo update-exim4.conf
sudo systemctl start exim4

Test it out:

echo "test_mail" | mail -s "test_subject"

You can watch /var/log/exim4/mainlog to see what happens; if things are really going wrong, shut down exim4 and run it manually:

sudo exim -qf -d+all

Setup Backup

This is my backup script; I put it in /root and make sure it's not readable by anyone but root.

Then I symlink the file from /etc/cron.daily.

Lastly, you have to enable cron, as Qubes disables it by default; from dom0:

qvm-service -e nextcloud crond

Here is the backup script I created for nextcloud/duplicity:

# Log location (pick wherever you keep logs)
LOGFILE=/root/nextcloud_backup.log
# Export the PASSPHRASE variable for duplicity (value omitted)

startTime=$(date +"%s")
# Get the date
repDate=`date +%Y%m%d`
repSlashDate=`date +%Y/%m/%d`
# Add today's date to the test backup file
echo ${repDate} > /srv/nextcloud/backup_test_file.txt

echo "\n\n###### Nextcloud Files #######\n" >> ${LOGFILE}
/usr/bin/duplicity remove-all-but-n-full 24 --force --s3-use-new-style s3:// >> ${LOGFILE}
/usr/bin/duplicity remove-all-inc-of-but-n-full 2 --force --s3-use-new-style s3:// >> ${LOGFILE}
/usr/bin/duplicity --full-if-older-than 30D /srv/nextcloud/ --exclude /srv/nextcloud/**_local --s3-use-new-style s3:// >> ${LOGFILE}

echo "\n\n###### Nextcloud Mysql/Config #######\n" >> ${LOGFILE}
sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --on >> ${LOGFILE}
mkdir /tmp/nextcloud
/usr/bin/mysqldump --single-transaction -h localhost -u nexapp -pLG7JUxiRwYlL7kPTYO14ICiVISax6f nextcloud > /tmp/nextcloud/nextcloud-sqlbkp.bak
cp /var/www/nextcloud/config/config.php /tmp/nextcloud/
/usr/bin/duplicity remove-all-but-n-full 24 --force --s3-use-new-style s3:// >> ${LOGFILE}
/usr/bin/duplicity remove-all-inc-of-but-n-full 2 --force --s3-use-new-style s3:// >> ${LOGFILE}
/usr/bin/duplicity --full-if-older-than 30D /tmp/nextcloud/ --s3-use-new-style s3:// >> ${LOGFILE}
rm -r /tmp/nextcloud
sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --off >> ${LOGFILE}

echo "\n\n###### Verifying Backup ######\n" >> ${LOGFILE}
/usr/bin/duplicity -t 12h --file-to-restore backup_test_file.txt --s3-use-new-style s3:// /tmp/backup_test_file.txt >> ${LOGFILE}
recoveredDate="$(cat /tmp/backup_test_file.txt)"
expectedDate="$(date -d yesterday +%Y%m%d)"
if [ "$recoveredDate" = "$expectedDate" ]; then
        echo "BACKUP TEST PASSED" >> ${LOGFILE}
else
        echo "BACKUP TEST FAILED!!!!! Expected: ${expectedDate}, but found: ${recoveredDate}" >> ${LOGFILE}
fi
rm /tmp/backup_test_file.txt

endTime=$(date +"%s")
duration=$(($endTime - $startTime))
echo "" >> ${LOGFILE}
echo "Execution Time: $(($duration / 60)) minutes $(($duration % 60)) seconds" >> ${LOGFILE}

/usr/bin/mail -s "Nextcloud Backup $repSlashDate" < ${LOGFILE}

One nice feature of this script is the built-in verification of the backup. Each day I insert a file containing the current date; the following day I restore that file from the previous day's backup and make sure the date it contains matches the previous day's date.

This at least gives me a basic proof that I can recover the remote backup.
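The verification boils down to a date round trip. Here is the logic in isolation, using a local file in place of the S3 restore (so both dates are today's; in the real script, recoveredDate comes from the file duplicity pulled out of yesterday's backup):

```shell
# Written at "backup time":
echo "$(date +%Y%m%d)" > /tmp/backup_test_file.txt

# Checked at "verify time":
expectedDate="$(date +%Y%m%d)"                    # today, since there is no day gap in this demo
recoveredDate="$(cat /tmp/backup_test_file.txt)"  # what the "restore" gave back

if [ "$recoveredDate" = "$expectedDate" ]; then
        echo "BACKUP TEST PASSED"
else
        echo "BACKUP TEST FAILED!!!!! Expected: ${expectedDate}, but found: ${recoveredDate}"
fi
rm /tmp/backup_test_file.txt
```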

Also note that I put an exclusion in for any file/dir which ends with _local. There are some files which I want local copies of with the raid protection of the Drobo, but it’s not worth sending them offsite and paying for bandwidth and offsite S3 costs.

Virus Scanning

Because we are accepting files from many machines, with auto-syncing clients, it seems like a good idea to me to have some kind of virus scanning to look for obvious malware.

ClamAV has been around for ages on Linux systems and still seems popular with regular definition updates.

Installing is straightforward:

sudo apt install clamav

The daemon process seems a little quirky to me; the docs on how it worked and what it could do were a little confusing, and I couldn't see any way to get it to automatically email me about issues. So I decided to just run a manual scan against the Nextcloud directory once a day.

I created the following script:


# Log location (pick wherever you keep logs)
LOGFILE=/root/nextcloud_av.log

startTime=$(date +"%s")
repDate=`date +%Y%m%d`

echo "###### ${repDate} Daily Scan  #######" > ${LOGFILE}

/usr/bin/find /srv/nextcloud -ctime -2 -type f -print0 | /usr/bin/xargs -0 clamscan -i --log=${LOGFILE}
RESULT=$?

endTime=$(date +"%s")
duration=$(($endTime - $startTime))
echo "" >> ${LOGFILE}
echo "Execution Time: $(($duration / 60)) minutes $(($duration % 60)) seconds" >> ${LOGFILE}

if [ $RESULT -eq 0 ]; then
        /usr/bin/mail -s "Nextcloud AV Scan OK $repDate" < ${LOGFILE}
else
        /usr/bin/mail -s "!!! Nextcloud AV MALWARE FOUND $repDate !!!" < ${LOGFILE}
fi

I didn’t want to scan the entire file base every night, so I set out to scan only recently changed files. I have to give a shout out to this blog which was the basis for the line in my script doing the scanning.

I tweaked the date range and params for clamscan a little to my tastes (only scanning files changed in the last 2 days; -i logs only infected files).

I wrapped the command with some timing and basic logging. Also I’m looking at the result code of clamscan:

0 : No virus found.  
1 : Virus(es) found.  
2 : Some error(s) occurred.

I’m lumping any non-zero result code as “Malware” but at least it will get my attention.

I get an email every day with the output of the daily scan. Any files found will be listed; I chose not to auto-remove them in case of false positives.


This ended up being a fair amount of work to set up, but I'm very happy with the result.

After segmenting my network to the extent that it’s barely usable, it’s really nice to have a mechanism for moving files between machines.

It’s also really nice to have a place we can put important files and financial docs which are backed up offsite.

Nextcloud itself has a nice UI and the apps work great; now let's cross our fingers that someone is still supporting it a few years from now :/
