10 July 2017

How To Create A NAS Using ZFS and Proxmox

Let's virtualize all the things! And also set up a NAS seedbox.

Theory:
The idea is to create a standalone server that uses ZFS, transfer files to it and selectively share those files using file sharing protocols.
CIFS-SMB/HTTP(S)/BitTorrent/NFS

Part 1) Learn
Part 2) Download Prerequisites
Part 3) Create Proxmox Installer USB flash drive
Part 4) Install Proxmox
Part 5) Connect over HTTPS and SSH
Part 6) Update System
Part 7) Configure ZFS
Part 8) Configure "iso" Storage Directory in ZFS Pool
Part 9) Configure Samba/ZFS SMB
Part 10) Connect to ZFS Share
Part 11) Create Container/VM
Part 12) Install and Configure Container OS
Part 13) Share ZFS Mount Point(s) with Container
Part 14) Start rTorrent/ruTorrent configuration

Part 1) Learn

Theory: The idea is to figure out what technologies to use.

Theory:

  • ZFS Theory: What Is ZFS? by Oracle
    • So ZFS is basically software RAID that extends from the disks all the way up through the file-system layer of the storage stack.
  • RAID-Z/RAID-Z2/RAID-Z3: ZFS Administration, Part II- RAIDZ.
    • RAIDZ is a software implementation of RAID5/6 on ZFS with excellent capacity, reliability and sub-par performance.
    • RAIDZ levels are good for low-I/O archival data. For better performance at the cost of capacity, use mirrors and stripes instead (see the sketch after this list).
  • Proxmox Theory: www.proxmox.com/en/proxmox-ve/features
    • A Virtual Machine Manager (VMM) that sits on top of Debian Linux and automates the use of KVM and QEMU. Debian supports ZFS.
  • Seedbox Theory: What is a seedbox?
    • A virtual machine configured as an appliance that focuses on providing BitTorrent services. Why? Reasons.
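For anyone curious what the mirror+stripe alternative mentioned above looks like, here is a hypothetical sketch (the pool name fastpool and the four <id> placeholders are illustrative only, not part of this guide's build):

# Hypothetical striped-mirror ("RAID 10" style) pool: two mirrored pairs striped together
zpool create -f fastpool mirror <id1> <id2> mirror <id3> <id4>
zpool status fastpool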

Wendell's "Proxmox: How To Virtualize All the Things"

ElectronicsWizardry's "Making File+VM server with Proxmox with a ssd cache"

Wendell's is theory centric and ElectronicsWizardry's is more concrete.

Primary Documentation:

Less Helpful Documentation:

Part 2) Prerequisites

Theory: The idea is to have hardware that meets the minimum requirements for ZFS on a NAS and download the specified software.

Hardware prerequisites:

  • Debian AMD64 compatible client and target computer systems
  • 8+ GB RAM, ECC recommended but not required
  • 2+ HDDs
  • Either :
    • 2 flash media usb drives: 1 to install from and 1 to install to.
    • Or
    • 1 optical disk drive and 1 flash media usb drive: install from optical media to usb flash media.
  • A working LAN.

Software Downloads:

  1. Download and install 7-Zip, direct.
  2. Download and install Notepad++, direct.
  3. Download Putty, direct.
  4. Download Proxmox VE 5 iso, direct, (torrent is faster).
  5. For installing proxmox from a USB flash drive, download Etcher portable, or Direct Link.
    • Note: UNetbootin, Rufus and diskpart do not work. Use Etcher.
  6. Download container template files (including the Ubuntu 16.04 standard template, ubuntu-16.04-standard_16.04-1_amd64.tar.gz): http://download.proxmox.com/images/system
  7. Download random iso files:

Please download Ubuntu Server 16.04 before continuing: ubuntu-16.04.2-server-amd64.iso.

Part 3) Create Proxmox Installer USB flash drive

Theory: As above.

  1. Insert the installer USB flash drive into the client system.
  2. Extract out Etcher.
  3. Launch Etcher.
  4. Select downloaded Proxmox5.iso
    -Etcher1.png
  5. Click on Flash.
    -Etcher2.png
  6. Wait.
    -Etcher3.png
    • -
      -Etcher4.png
  7. Safely eject usb drive when complete.
    • safely-remove.png
  8. Remove flash drive from computer.

Part 4) Install Proxmox

Theory: Proxmox installer USB created. Now to install Proxmox.

  1. Connect Proxmox installer flash drive into server system.
  2. Insert Proxmox target flash drive or disk into server system.
  3. Boot from the Proxmox installer flash drive.
    • Either set the flash drive to boot in the BIOS/UEFI (Del, F2, Esc)
    • Or do a one-time boot menu, F10 or F12.
    • proxmox-boot.grub.png
    • -
    • prox1.png
  4. Follow the Proxmox installer prompts.
  5. Install to the correct target USB disk or internal disk if using a dedicated one.
    • proxmox-installer-targetdisk.png
  6. Create a strong password for the Proxmox server that is not Password1. Password1 will be used in the examples going forward.
  7. Set a static IP appropriate to your network. 192.168.0.49/24 will be used in the examples going forward.
  8. Call your server something. Like 'server'. kiwi2 will be used in the examples going forward.
  9. Wait for install to finish.
  10. Reboot.
  11. Remove installer USB flash drive
  12. Make sure Proxmox target flash drive is set to boot first in BIOS/UEFI.
    • proxmos-install-pregui.png
    • -
    • proxmox-boot-loginscreen.png
    • The VGA cord, keyboard, mouse can now be unplugged. The only thing that box needs now is a power cord, an Ethernet cord and software configuration information which can be done over HTTPS/SSH.

Part 5) Connect over HTTPS and SSH

Theory: So Proxmox installed. Now to connect to it.

  • Launch Firefox/Chrome
  • Enter in https://192.168.0.49:8006 Note: Use HTTPS not HTTP.
  • Ignore the certificate warning.
  • proxmox-connect-certwarning.png
  • proxmox-connect-certwarning2.png
  • type in credentials
    • username: root
    • password: Password1
    • proxmox-connect-credentials.png
  • leave window open
  • Launch Putty
  • putty-gui.png
  • Enter in 192.168.0.49
  • Port: 22, Connection type: SSH
  • Click Open
  • Ignore the ssh key fingerprint warning
  • login as:
    • username: root
    • password: Password1
      -putty-logged-in.png

With the related documentation, the web interface, and SSH all open, it is time to update proxmox.

Part 6) Fix Some Broken Proxmox Stuff

Theory: Since most Linux administration is done from the command line, the idea is to get familiar with the basic controls in order to work effectively.

The basic controls for command line interfaces are as follows:

  • CTRL+c for "stop that".
  • CTRL+c 3x for "seriously, just stop".
  • CTRL+d for "end input". This is for python mostly.
  • CTRL+q or q for "quit application".
    • The special quit procedure for vi is shift alt : _ q ! ctrl q alt ! :wq power button for 5 seconds. Tip: Use nano instead.
  • Up Arrow to cycle previous commands.
  • Highlight anything with your mouse to automatically copy it.
  • Right-click anything with your mouse to paste.
  • cd to change directory
    • / is "root" the base of the directory tree
    • ~ the squiggly line (tilde) is "home" for the current user. This is usually a folder under /home. For root, home is /root.
    • cd ~ Change back to home directory
    • cd tmp Change to the tmp directory from the current directory.
    • cd /tmp Change to the tmp directory from the root directory.
  • ls or dir Display the contents of the current folder. Use ls -la for detailed output. Use dir /b for simple output.
    • ls /home List the contents of the /home folder.
  • Tab for "Complete this command for me" or "Give me the available options". If it doesn't work, try it multiple times.
  • Applications with a CLI typically respond to app.exe --help.

Advanced:

  • Home to go to the start of the line.
  • End to go to the last character in the current line.
  • Shift + Page Up to "scroll up", similar to the mouse wheel.
  • Shift + Page Down to "scroll down", similar to the mouse wheel.
  • If you can't be bothered to scroll up, pipe | the output into less. Pipe is Shift plus the key below Backspace; without Shift it is \. (A short practice sequence follows this list.)
    • app.exe --help | less to watch each page of help, one page at a time. For Windows, use more.
  • whoami Discover your identity.
  • If you can't be bothered to use the mouse, dump standard output (the screen) into a file app.exe --help > temp.txt.
  • cat file.txt (or type file.txt on Windows) means "dump the contents of this file to standard output (the screen)"; tail file.txt dumps just the last few lines.
  • touch file.txt means "create an empty file.txt". This is useful to test for write access.
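To practice, here is a harmless sequence (separate from the fixes pasted in below) that ties a few of the commands above together once logged in over SSH; press q to leave less:

whoami
ls --help | less
ls -la /root > temp.txt
cat temp.txt
rm temp.txt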

Paste these into putty and press Enter:

Sometimes removes the subscription nag. Credit: Gmck.
sed -i.bak "s/me.updateActive(data)/me.updateCommunity(data)/g" /usr/share/pve-manager/js/pvemanagerlib.js

Fixes ZFS:
/sbin/modprobe zfs

Update repository list:
nano /etc/apt/sources.list

nano is a command line text editor. It can be used to edit text files. Controls:

  • The arrow keys work normally.
  • Enter is a new line.
  • CTRL + o to "write out" the file to the file system after making changes. This is also called saving the file.
  • CTRL + x to exit nano.
  • CTRL + w to search.

repository-list.png

Add one of the following:
If with a subscription, add the first deb repository.
If without a subscription, add the second deb repository.

# Proxmox subscription 
deb https://enterprise.proxmox.com/debian/pve stretch pve-enterprise
# Proxmox no subscription 
deb http://download.proxmox.com/debian/pve stretch pve-no-subscription

repository-list.png

CTRL + o (then Enter to confirm the file name)
CTRL + x

Then update OS packages:
apt-get update
apt-get upgrade -y

Reboot.
shutdown -r 0
-r means "reboot"; -h means "halt", which is another word for shutdown. 0 means "now".
In Windows, shutdown means "log off". Yeah... Also: put -t in front of the 0 and use -s instead of -h:
Windows: shutdown -s -t 0 or shutdown -r -t 0

Close the putty window. Once the system comes back online, connect over putty again. The next step is ZFS configuration.

Part 7) Configure ZFS

Theory: The idea is to create a ZFS pool, the right way.

The first step is to figure out the /dev/disk/by-id names for the disks. The zpool command needs those ids to know which disks will be in the array. If the /dev/sda style names are used instead, the pool can randomly fail to mount after reboots or after server maintenance that changes the ports/port order the disks are connected to.

While logged in over SSH with Putty to the proxmox server, type the following:

ls /dev/disk
ls /dev/disk/by-id

This will create lots of output similar to the following:

dev_disk_by-id.png

Highlight the sane looking entries in putty:
(the ones that start with ata-Hitachi... or similar, highlighted in yellow below)

dev_disk_by-id-highlighted-boxed.png

Paste that garbage into Notepad++.

dev-disk-notepad1.png

View->Word Wrap.
notepadplusplus-wordwrap.png

Remove the duplicates with extra characters. These correspond to partitions on the disks.
Example:
dev-disk-notepad2.png
Then place each drive that will be used in the main pool on a single line separated by spaces. See line #12 above.
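As a shortcut, a de-duplicated, space-separated list like that can also be generated directly in the shell (a sketch that assumes SATA disks whose ids start with ata-, as above):

# Whole-disk ids only (partition entries contain "part"), joined onto one line
ls /dev/disk/by-id/ | grep '^ata-' | grep -v part | tr '\n' ' '; echo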

ZFS stuff is done using two commands: zpool to create and manage the raw pool, and zfs to create and manage nested filesystems.

zpool --help
zpool status

The create syntax is: zpool create -f -m <mount> <pool> <type> <ids>

  • create: subcommand to create the pool.
  • -f: Force creating the pool to bypass the "EFI label error".
  • -m: The mount point of the pool. If this is not specified, then the pool will be mounted to root as /pool.
  • pool: This is the name of the pool.
  • type: mirror, raidz, raidz2, raidz3. If omitted, the default type is a stripe or raid 0.
  • ids: The names of the drives/partitions to include in the pool obtained from ls /dev/disk/by-id.
  • For 4k native disks use: -o ashift=12
    • 4k disk syntax: zpool create -f -o ashift=12 -m <mount> <pool> <type> <ids>

The zfs pool name is case sensitive; pick something memorable. "storage" mounted at / (root) will be used going forward.

One last thing to do before actually creating the pool. Check to see if the HDDs are advanced format drives:

fdisk -l | grep Units
fdisk -l | grep Sector
cat /sys/class/block/sda/queue/physical_block_size
cat /sys/class/block/sdb/queue/logical_block_size

Check every disk: sda, sdb, sdc, sdd, sde, ... (see the loop below). Do not mix 4k and non-4k drives in the same pool, but if it can't be helped, just use -o ashift=12.
Note: Some disks are 512e drives: 4k native drives that report a logical sector size of 512.
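Here is a quick loop to check every whole disk in one go (a sketch; /sys/block lists whole disks only, so partitions are skipped automatically):

for d in /sys/block/sd*; do
  echo "$(basename $d): physical=$(cat $d/queue/physical_block_size) logical=$(cat $d/queue/logical_block_size)"
done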

RaidZ2 Example:

# Create a zpool with pool name "storage"
zpool create -f storage raidz2 <ids>
# Or specify a mount point
zpool create -f -m /mnt/storage storage raidz2 <ids>
# Or if using 4k native disks
zpool create -f -o ashift=12 storage raidz2 <ids>

The <ids> in the above commands correspond to the list of disks that was put on one line above in Notepad++.

The literal command entered to create the zpool should look like this for an 8-disk pool:

zpool create -f storage raidz2 ata-Hitachi_HUA722020ALA330_JK11A8B9K9U54F ata-Hitachi_HUA722020ALA330_JK11A8B9KP866F ata-Hitachi_HUA722020ALA331_B9G5VSWF ata-Hitachi_HUA722020ALA331_B9G794PF ata-Hitachi_HUA722020ALA331_B9G7WEKF ata-Hitachi_HUA722020ALA331_B9GWPB7T ata-Hitachi_HUA722020ALA331_B9H5AB0F ata-Hitachi_HUA722020ALA331_YAJSZSDZ

So create it, and then make sure the pool exists after creating it.

zpool list
zpool list -v
zpool iostat
zpool iostat -v

Then check that proxmox's storage manager knows it exists:
pvesm zfsscan

If you have a cache drive, like an SSD, add it now by device id:
zpool add storage cache ata-LITEONIT_LCM-128M3S_2.5__7mm_128GB_TW00RNVG550853135858 -f

Enabling compression makes almost everything faster. This should really be enabled by default.
zfs set compression=on storage
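To confirm the setting took, and later to see how much space compression is actually saving:

zfs get compression,compressratio storage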

Part 8) Configure iso Storage Directory in ZFS Pool

Theory: The idea is to create nested ZFS-administered filesystems for each type of data, rather than manipulate the root of the pool. This prevents recursion loops and inappropriate locking when sharing or mounting data, allows setting quotas, separates operating system data from user data, and improves organization.

For this example, data will be separated into storage for virtual disks and storage for static data. Static data can then be organized using ordinary subdirectories. Do note that containers can mount the static data directories directly from the Proxmox host, but virtual machines will need the static data to be shared over NFS.

zfs create storage/share
zfs create storage/share/iso
zfs create storage/share/downloads
zfs set quota=1000G storage/share/downloads
zfs create storage/vmstorage
zfs create storage/vmstorage/limited
zfs set quota=1000G storage/vmstorage/limited
zfs list
zpool status
zpool iostat -v

The quota commands above cap the storage/share/downloads and storage/vmstorage/limited filesystems at 1 TB each.
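To verify the quotas and keep an eye on usage against them:

zfs get quota,used,available storage/share/downloads storage/vmstorage/limited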

After creating at least one nested filesystem (recommended), subfolders can be created normally. Alternatively, these could all be ZFS-administered filesystems as well.

ls /storage
ls /storage/share
mkdir /storage/share/Software
mkdir /storage/share/Backups
mkdir /storage/share/Projects
mkdir /storage/share/junk
ls /storage/share

Containers are created from templates. The templates have been downloaded locally. Proxmox needs them available server-side. One solution to this quandary is to add /storage/share/iso as iso and container type storage and upload the templates to that folder so Proxmox can use them.
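For reference, the same storage entries that the GUI steps below create can also be added from the shell with pvesm (a sketch; do one or the other, not both, and double-check the IDs and paths against your own setup):

# Directory storage for ISO images and container templates
pvesm add dir iso --path /storage/share/iso --content iso,vztmpl
# ZFS-backed storage for VM/container disks
pvesm add zfspool vmstorage --pool storage/vmstorage
pvesm add zfspool vmstoragelimited --pool storage/vmstorage/limited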

Back in GUI land...

Click on "Datacenter"
"Storage"
"Add"
"Directory"
ID: iso
Directory: /storage/share/iso
Content: make sure only "ISO image" and "Container template" are selected.
"Add"

And again...
"Add"
"ZFS"
ID: vmstorage
ZFS Pool: /storage/vmstorage
gui-add-vmstorage.png
"Add"

And again...
"Add"
"ZFS"
ID: vmstoragelimited
ZFS Pool: /storage/vmstorage/limited
"Add"

Click on "iso" under the server's name.

gui-iso-storageSelected.png

"Content"
"Upload"
Content: ISO image
Select File...

proxmox-gui-iso-uploadBox.png

Upload ubuntu-16.04.2-server-amd64.iso

"Upload"
Content: Container template
Upload ubuntu-16.04-standard_16.04-1_amd64.tar.gz

Repeat the above steps for any additional ISO files and containers.

proxmox-gui-iso-upload-finished.png

Part 9) Configure Samba/ZFS SMB

Theory: So should Proxmox share the static data directory natively using samba/zfs? Or should the folder be mounted into a container and then shared from within the container? This tutorial will cover native SMB.

For the root share, /storage/share, SMB can be configured on the native proxmox server using either samba or with zfs. This tutorial will cover samba.

Useful Documentation:
How to Create a Network Share Via Samba
Theory: SMB, CIFS, Samba, Windows File Sharing notes

Official Documentation:
wiki.archlinux.org/index.php/Samba
help.ubuntu.com/community/Samba
www.samba.org/samba/docs/man/Samba-HOWTO-Collection

On the root proxmox server:

apt-get update
apt-get install samba

Add root as a Samba user and create a password:
smbpasswd -a root

It would also be nice not to have to connect to the server as root every time.
Let's create a new user and give them Samba permissions.

To create a new Unix user:
useradd -m user
passwd user

Then add the new user to Samba:
smbpasswd -a user

nano /etc/samba/smb.conf

Edit the following:

server role = standalone server
create mask = 0777
directory mask = 0777
[share]
comment = root share
browseable = yes
path = /storage/share
guest ok = no
read only = no

Comment out the other shares before writing out.
Note that 0777 permissions are more suited to home shares that need to be accessed from Windows by multiple users/applications (rTorrent). For dedicated seedboxes, use 0755, or better yet, do not use Samba (smbd) configured this way.

service smbd stop
service smbd start

Test for errors.
testparm
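Optionally, confirm the share is actually being offered (the smbclient package may need to be installed first; enter the smbpasswd password when prompted):

apt-get install -y smbclient
smbclient -L //localhost -U user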

Part 10) Connect to ZFS Share

Theory: So ZFS exists, and samba works, now to connect to it using a desktop OS.

In Windows cmd.exe...

Windows Key + R
cmd

windowskey-r-run.png

Syntax:
net use /persistent:yes
net use S: \\192.168.0.50\share /u:user1
or
net use /persistent:yes
net use S: \\192.168.0.50\share /u:user1 Password1

Instead of using cmd.exe, it is also possible to "Map" a "Network Drive" using the GUI.

mapNetworkDrive.png

The robocopy parameter /copy:dat is the default; set it to /copy:dt to disable attribute copying for Linux file systems.
robocopy C:\Junk S:\Junk /mir /copy:dt
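To review or undo the mapping later from the same cmd.exe prompt (optional):

REM List current mappings, then disconnect S: if it is no longer needed
net use
net use S: /delete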

CIFS and SMB Timeouts in Windows

Letting Windows wait a while longer for Samba servers to respond with their network share information can improve stability, especially with low-spec Samba servers.

Windows Key + R
regedit

regedit.png

HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters

Have fun tweaking.

Hint: Create a SessTimeout DWORD under the above key and give it a value of 105 (decimal) to increase the timeout from 45 seconds to 105 seconds.
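For those who prefer the command line over regedit, the same value can be set from an elevated cmd.exe prompt (a sketch; adjust the decimal value to taste):

reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters" /v SessTimeout /t REG_DWORD /d 105 /f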

Part 11) Create Container/VM

FYI: container config files live at /etc/pve/lxc/<vmid>.conf, e.g. /etc/pve/lxc/100.conf. See pve.proxmox.com/wiki/Linux_Container.

Configuration choices must now be made in order to know which containers/VMs to set up.

Theory: So, should the torrent server run in a container for better performance, or in a virtual machine for better isolation?

  1. If it will mostly be a client and used to download stuff, then a container is fine.
  2. If it will be publicly accessible, used to seed stuff, and/or have the seedbox web GUI exposed, then that VM needs to be isolated from the rest of the network: connect it to its own virtual switch so it can only communicate through a secondary PFSense container/VM router that restricts outbound traffic to the local router's IP. Do not even think of exposing the seedbox management GUI. No. The Proxmox firewall may be an alternative to PFSense.

This tutorial will cover the first scenario only.

Click on Create CT.
Hostname: seedbox
Password: Password1

container-seedbox2.png

Next.

storage:iso
template: ubuntu 16.04 Note: Use this exact version. rTorrent and ruTorrent can be quite picky.

container-template-seedbox-ubuntu16.04.png

Next.

Storage: vmstoragelimited
DiskSize: 20GB

container-rootdisk.png

Next.

Cores: 1 or 2

container-cpucores.png

Next.

Memory: 512-1024MB

container-memory.png

Next.

Network:
IPv4/CIDR: 192.168.0.49/24 - Change as appropriate to your network.
Gateway: 192.168.0.1 - Change as appropriate to your network.
container-networking.png
Next.

DNS domain: use host settings
Next.

Confirm.

container-confirm.png

Finish.

Wait for the task to complete.

container-taskok.png

At TASK OK, close the dialogue box.

The container now exists but has not been started. It would be nice if the downloads went into the downloads folder. For that to happen, the /storage/share/downloads directory needs to be made available to the container before it starts for real. And to avoid possible software conflicts while mounting it, it is preferable to install the OS first and mount it afterwards.

And so it comes time to power on and configure the container, for the sole purpose of shutting it off again.

Part 12) Install and Configure Container OS

Theory: Only one user currently exists on the system, root, and Ubuntu does not allow SSH (remote) logins for root by default. Let's create a new user so SSH access is possible without changing Ubuntu's configuration, since Putty's SSH is more user friendly than the web GUI VNC thing that shows a CLI...

Click Start
Click Console

As feared by normies... server-level GUIs only exist to start CLIs....

seedbox-gui-console.png

username: root
Password: Password1

Update Ubuntu first. This will take some time.
apt-get update
apt-get upgrade -y

And then create a new user.

useradd -m user
passwd user
Password: Password1

And then back to Putty.

container-putty-start.png

When logged in as user via Putty/SSH, it is not possible to do administrative tasks. To get anything done, it is necessary to become root. To become another user with most Linux shells, use su [username], or su - for root.

su -
Password: Password1

container-ssh-becomeRoot.png

Seedbox Theory:
The lowest resource utilization (read: excellent performance) seedboxes use rTorrent with the ruTorrent web interface over FastCGI (PHP) on Apache or nginx. Most mid-range seedboxes with good performance (read: human configurable) typically use Transmission or Deluge instead. Lower performance, and very user friendly, seedboxes typically use uTorrent v2.2.1 (Windows only) or qBittorrent (cross platform). While other applications may support the BitTorrent protocol, they are not appropriate for seedboxes. Except for rTorrent, modern seedbox-quality BitTorrent clients have integrated web interfaces that offer basic functionality.

For the purposes of this tutorial, the Quickbox software will be used. It is essentially a well supported, push-button style installer script for rTorrent/ruTorrent, hosted on GitHub, for Debian/Ubuntu. The notable feature is that it actually works, unlike literally everything else, including manual setup. An honorable mention: the Docker version of rTorrent looks promising, but since it cannot run natively, it would be better to just use Transmission instead.

In case Quickbox dies at some future date, the landscape's alternatives should now be clear.

Quickbox Resources:

Further Reading:


Quickbox Command Reference:


  • fixhome - Quickly adjust /home directory permissions.
  • showspace - Shows amount of space used by each user.

  • createSeedboxUser - Creates a shelled seedbox user.
  • deleteSeedboxUser - Deletes a created seedbox user and their directories (permanent).
  • changeUserpass - Change a user's SSH/FTP/deluge/ruTorrent password.
  • setdisk - Set the disk quota for any given user (must be implemented separately).

  • upgradeDeluge - Upgrades deluge when new version is available.
  • upgradeBTSync - Upgrades btsync when new version is available.
  • upgradePlex - Upgrades Plex when new version is available.
  • upgradeJacket - Upgrades Jacket when new version is available.
  • upgradepyLoad - Upgrades pyLoad when new version is available.
  • setup-pyLoad - installs pyLoad
  • quickVPN - Something about VPNs.
  • removepackage-cron - upgrades your system to make use of systemd + (must be on Ubuntu 15.10+ or Debian 8)
  • clean_mem - flushes the server's physical memory cache (helps avoid swap overflow)

Of note here is that Quickbox does not support multi-user configurations on a vanilla install. Care must be taken to manually ensure each rTorrent session is unique, with unique FCGI ports, and to set up disk quotas properly. A lazier alternative is to use containers/VMs to support multiple users and implement quotas using ZFS instead.
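A hypothetical sketch of that ZFS-quota approach, run on the Proxmox host (the seedbox2 name and 500G size are placeholders): give each seedbox user/container its own filesystem under the downloads dataset and cap it there.

# One filesystem per seedbox user/container, each with its own quota
zfs create storage/share/downloads/seedbox2
zfs set quota=500G storage/share/downloads/seedbox2
zfs get quota,used storage/share/downloads/seedbox2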

As root in the seedbox container...

whoami
cd ~
apt-get -yqq update; apt-get -yqq upgrade; apt-get -yqq install git lsb-release
git clone https://github.com/QuickBox/QB /etc/QuickBox
bash /etc/QuickBox/setup/quickbox-setup

The Quickbox Installation should begin. Y to log installation progress.

quickbox-install-log.png

Enter seedbox or similar for the hostname

quickbox-install-hostname.png

N to disable quotas. This feature needs to be manually configured to work. On a Proxmox/ZFS NAS, it makes more sense to manage quotas using ZFS filesystems and install multiple instances of Quickbox.

quickbox-install-DisableQuotas.png

Press the ENTER key on the keyboard to continue with the rest of the configuration options.

quickbox-install-quotas2.png

The 10Gb question is about TCP optimizations for high-speed seedboxes that sit directly on the internet. Enter N.

quickbox-install-10gb.png

Enter 1 to use the latest version of rTorrent.

quickbox-install-rtorrent.png

Enter 4 to not install Deluge. It can be installed from the GUI later if needed.

quickbox-install-deluge.png

Pick a theme.

quickbox-install-theme.png

This is the main non-root user for connecting to and managing the seedbox.
Add the following:
Username: user
Password: Password1
Change as appropriate.

quickbox-install-sudoers-password.png

y to install ffmpeg. Might as well do it now.

quickbox-install-ffmpeg.png

Just press Enter. Although, if you think you might want to use FTP later, go ahead and type the IP in. Regardless, it will not be publicly accessible without port forwarding, either manual or via uPnP.

quickbox-install-ftpIP.png

Important! Enter n to not block public trackers. For commercial seedboxes, it sometimes makes sense to block them.

quickbox-install-blockPublicTrackers.png

And...leave and come back in 30 min. (yes really)

quickbox-install-ecosystemStart.png

Eventually, reboot when it says to.

quickbox-install-reboot.png

Putty will drop connection as the container restarts.

Now to do the initial configuration and fix all the broken things!

Open a web browser:
Address: https://192.168.0.49

Ignore the cert warning.

quickbox-gui-login.png

Username: user
Password: Password1

The management gui console for the seedbox should appear.
Scroll down to the 'Service Control Center'.

rTorrent sometimes has a red dot next to it and is missing from the navigation pane. Let's fix that. If rTorrent exists, skip this next section down to the part where SSH gets fixed.

quickbox-gui-rtorrent.png

Open Putty.

SSH Address: 192.168.0.49
Username: user
Password: Password1

Oh no! SSH is broken too!

quickbox-fix-ssh-broken.png

Let's fix this using the CLI. The Proxmox CLI command pct enter 100 works only for containers; the Proxmox web GUI VNC console also works with VMs. Open the Proxmox web GUI console, or the CLI over Putty/SSH.

https://192.168.0.50:8006
Click on 100 (seedbox) under the server name.
Console

quickbox-proxmox-gui-login.png
Username: root
Password: Password1

Fix rTorrent.
apt-get install rtorrent -y

quickbox-fix-rtorrent.png

I suppose SSH could be left disabled, but a Linux box without SSH is like Windows without a GUI: it just feels wrong somehow.

The idea here is to find the ssh daemon and have it autostart, the lazy way, when the computer boots.

which sshd
sshd --help
#The ssh daemon must always be started using an absolute path.
/usr/sbin/sshd -p 22

SSH now works; now to make sure it always starts on boot. Crontab is the Linux per-user scheduler.

crontab -e
@reboot /usr/sbin/sshd -p 22
CTRL + o
CTRL + x

quickbox-fix-sshd.png
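Alternatively, since Ubuntu 16.04 uses systemd, a cleaner fix (assuming the openssh-server package is present or gets installed) is to enable the service, which on Ubuntu is named ssh:

apt-get install -y openssh-server
systemctl enable ssh
systemctl start ssh
systemctl status ssh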

And back to the seedbox management GUI:
Address: https://192.168.0.49

It is still broken.

quickbox-gui-rtorrent.png

So restart that service daemon. Stop it by clicking on Enabled. Wait for the page to reload.

quickbox-gui-rtorent-disabled.png

After the page reloads, click it again to enable it. It should now be fixed.

quickbox-gui-rtorrent-fixed.png

ruTorrent should also now appear in the navigation pane. Click it.

quickbox-gui-rtorrent-navpane.png
Note: In the above picture, the ? tab displays the list of common Quickbox commands for managing the seedbox, including fixhome for permissions. That is not ominous at all about future permissions issues.

Feel free to configure rTorrent using the ruTorrent GUI now. Click on the "Gear" icon at the top.

quickbox-rutorrent-config.png

Configuration changes made to rTorrent using the ruTorrent GUI are temporary and do not persist across reboots.

rTorrent/ruTorrent are not currently utilizing the storage/share/downloads ZFS directory. Let's fix that.

Part 13) Share ZFS Mount Point(s) with Container

As a reminder, container config files live at /etc/pve/lxc/<vmid>.conf, e.g. /etc/pve/lxc/100.conf. See pve.proxmox.com/wiki/Linux_Container.

Theory: The idea is to make sure downloads from rTorrent go into the ZFS directory made to hold them, not into the container itself which has very limited storage.

So where does the ZFS directory /storage/share/downloads get mounted to?

This is what the directory structure looks like prior to downloading stuffs:

seedbox-home-directory-preDownload.png

So all of the files exist under /home/user.

And after downloading something:

seedbox-home-directory-postDownload.png

A directory is created under /home/user/torrents/rtorrent for every new multi-part torrent for its files to be placed into. /home/user/rwatch should probably be used as the watch directory, and the .torrent meta-files disappear into the void. Interesting. So /home/user/torrents/downloads would make a good mount directory.

And in Proxmox CLI world... (ssh://192.168.0.50:22)

Username: root
Password: Password1

First, set the machine to start automatically on Proxmox reboots:
pct set 100 -onboot 1

It is possible to configure up to 10 mount points per container (mp0 to mp9), loosely falling into one of these 3 categories:

  • 1) Proxmox VE storage subsystem managed Storage Backed Mount Points (3-subtypes):
    • Image based: these are raw images containing a single ext4 formatted file system.
    • ZFS subvolumes: these are technically bind mounts, but with managed storage, and thus allow resizing and snapshotting.
    • Directories: passing size=0 triggers a special case where instead of a raw image a directory is created.
  • 2) Bind Mount Points
    • Bind mounts allow you to access arbitrary directories from your Proxmox VE host inside a container.
    • Not managed by Proxmox VE storage subsystem.
  • 3) Device Mount Points
    • Device mount points allow block devices of the host to be mounted directly into the container.
    • Unmanaged, but the quota and acl options will be honoured.

The following uses the Bind Mount Points technique to share Proxmox path /storage/share/downloads with the container as /mnt/downloads.

pct shutdown 100
pct status 100
pct set 100 -mp0 /storage/share/downloads,mp=/home/user/torrents/downloads
# Use ro=1 for a read-only mount point.
pct set 100 -mp1 /storage/share/junk,mp=/home/user/junk,ro=1
# Mount the iso one too, just because.
pct set 100 -mp2 /storage/share/iso,mp=/home/user/iso,ro=1
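Before starting the container, it is worth confirming the mount point entries landed in its config (pct config prints the same information stored in /etc/pve/lxc/100.conf):

pct config 100
cat /etc/pve/lxc/100.conf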

And time to start it! (again)
pct start 100

If it does not start correctly, there was likely a syntax error in the mounting commands above. Double check the paths.
pct status 100

Double check everything was mounted correctly.

pct enter 100
ls /home
ls -R /home
exit

Part 14) Start rTorrent/ruTorrent configuration

Theory: The idea is to configure rTorrent to use the correct directory for downloads and other misc settings.

Change the following setting:
From:
/home/user/torrents/rtorrent
To:
/home/user/torrents/downloads

ruTorrentSettings.png

Unfortunately, the above setting gets reverted after reboots. For changes to persist, rTorrent must be configured using the /home/user/.rtorrent.rc text file.
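For illustration, the relevant directives look something like this in the classic rTorrent config syntax (a sketch only; adapt the existing Quickbox-generated .rtorrent.rc rather than replacing it, and keep UNIX line endings):

# Default download location
directory = /home/user/torrents/downloads
# Check the watch directory every 5 seconds and auto-start any new .torrent files
schedule = watch_directory,5,5,load_start=/home/user/rwatch/*.torrent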

Also: The permissions will probably be wrong, so update them.

cd /home/user/torrents
ls -l
chmod 0777 downloads

ruTorrent-permissions.png

A . in front of a file name in Linux means it is a "hidden" file and instructs interfaces not to show it when listing directory contents.

  • To show hidden files in Linux use ls -a.
  • To view permissions use ls -l.
  • To view all files, including hidden files and related permissions, use ls -la.

Guides and examples:

nano /home/user/.rtorrent.rc

Command line text editors are terrible. A better way is to edit the file in Notepad++ with UNIX/OSX line endings (important), then either paste the contents into nano over Putty, or transfer the raw config file around using a temporary HTTP server on both the Linux and Windows sides.

Seedbox CLI Example:

cd /home/user
python --version
python -v
CTRL + d
python -V
python -m SimpleHTTPServer 79
# If it does not work, use a port higher than 1024 (permissions issue) or a different port (conflict issue).

On Windows:

http://192.168.0.49:79
Download .rtorrent.rc.
Edit in Notepad++ according to the Guides and examples above.
Install Python 3 x86-64.
Windows key + r
cmd.exe
python -V
python -m http.server 234

On Seedbox:

CTRL + c
cd /home/user
rm .rtorrent.rc
aria2c http://192.168.0.200:234/.rtorrent.rc
# fix owner
chown user:user /home/user/.rtorrent.rc

Windows has working graphical text editors, did not require fixing permissions more than once and is not case-sensitive. Interesting.

Have fun configuring rTorrent!

Further tasks:

Notes:

  • In the seedbox's management CLI, showspace will not show correct usage for user, since it counts all mounted directories, including the read-only ones. In the web GUI, Your Disk Status will also be inaccurate, since it reports df -h numbers. So maybe showspace - readOnly mounts ~= usage?
  • In order to safely expose ruTorrent, do a VM install of Ubuntu 16.04 instead of a container, and use another PFSense VM, or Proxmox firewall rules, to restrict access to just the router.
  • Due to the numerous revisions and debugging involved in creating this guide, the Proxmox IP (192.168.0.50, 192.168.0.49) and Seedbox IP (192.168.0.49, 192.168.0.48) in the guide overlap. They should be different and static in a real setup.

7 comments:

  1. this is awesome , I was looking for something like this tutorial for a long time.
    can I ask you for an opinion?
    given the specs below, what would be the best setup option for me to have a nice, easy to manage file server and Proxmox hypervisor on that.
    my plan is to use ZFS raid-1 on SSDs for OS (yes I know it may be overkill but that was the original config and I think I will keep it that way.)
    I plan to use the 2x1TB as an additional ZFS raid-1 pool for VM and local storage.
    the rest I planned to use BTRFS and somehow expose them to the world but maybe ZFS is an option too. I do not understand ZFS 100% but your tutorial helps.
    my only issue now is that I have a bunch of drives that are different and ZFS does not like mix/match pools. what would you propose I set this up like?


    I have an oldish SuperMicro AMD server
    Chassis: Supermicro SC846 24 Bay chassis
    Motherboard: Supermicro H8DME-2 BIOS v3.5 (latest)
    CPU: 2 AMD Opteron Hex Core 2431 @ 2.4Ghz for total of 12 cores
    RAM: 49GB DDR2 PC-5300f @ 667mHz ECC
    4x1Gb NICs.

    I have 2x120Gb SSD for OS
    2x1TB HDD
    3 or 4x3TB HDD
    3 or 4x2TB HDD


    thanks Vl.

  2. Try posting your question on https://forum.level1techs.com/

    Here is the duplicate thread: https://forum.level1techs.com/t/how-to-create-a-nas-using-zfs-and-proxmox-with-pictures/117375

    You will get more opinions on things that way, instead of just mine. Either create a new thread (preferred) or post it as a question in the existing one.

  3. ok, I will do that.
    it's just your thread is hitting very close to home, so I was interested in your outtake more than anything else, since it seams like you are building what I am thinking of doing with slight variation.
    but I will open a new thread for this, sure.

  4. done https://forum.level1techs.com/t/help-needed-proxmox-file-server-config-options/119261

  5. Thank you very much for a very helpful guide. Love it.

  6. One useful shortcut to getting the correct disk by ID, is to create the pool with the regular /dev/sdx notation, but immediately export it, and then re-import it with:
    zpool import -d /dev/disk/by-id sametankpool
    try it!

  7. This was an awesome write-up, I am wondering if you are still using this server in this configuration? I followed the Proxmox part of this guide several years ago when I set up my server and it is still running great. I wonder if you would do things any differently now?
