Migrating from Synology NAS to Proxmox: A Step-by-Step Guide
Learn how to successfully migrate your Synology NAS disks to a Proxmox virtualization server with this detailed walkthrough covering RAID recovery, volume mounting, and VM restoration.
This weekend I retired my old Synology DS216 NAS and migrated its data to my Proxmox virtualization server. What I initially thought would be a straightforward disk transfer turned into an educational journey through software RAID, LVM management, and virtual machine restoration. This guide documents the process and challenges I encountered, which might help if you're planning a similar migration.
Hardware Background
My starting point was a simple home setup that needed an upgrade:
- Old System: Synology DS216 NAS
  - Two 2TB disks in RAID configuration
  - Standard software RAID and LVM setup
- Target System: Fujitsu D556/E85+ running Proxmox
  - Previously had two 500GB disks (one failed)
  - Added a new 1TB SSD as replacement
Preparing the Proxmox Server
Before migrating the Synology disks, I needed to reinstall Proxmox with a clean setup. The server's boot disk was no longer usable, making a fresh installation the best approach.
The preparation process involved:
- Installing the latest Proxmox version on a new M.2 SSD
- Setting up the boot environment properly
- Keeping the 1TB SSD that contained my old VM data, which needed to be reimported later
First, I had to handle my existing ZFS pool issue:
# Import the existing ZFS pool that wasn't automatically recognized
zpool import -f rpool
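With the pool imported, a quick sanity check confirms it is healthy and that the old VM volumes are still visible (rpool/data is the default Proxmox layout; adjust the names if yours differ):
# Verify the pool imported cleanly and reports no errors
zpool status rpool
# Confirm the old VM disks and container subvolumes are present
zfs list -r rpool/data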
Importing the Synology Disks
After physically installing the two 2TB disks from the Synology into my Proxmox server, I needed to reassemble the RAID array and mount the logical volumes. This was less straightforward than expected.
First, I installed the necessary tools and reassembled the Synology's RAID array:
# Install required packages
apt-get install -y mdadm lvm2
# Scan for and assemble RAID arrays
mdadm --assemble --scan
# Activate all available volume groups
vgchange -ay
# Create mount point
mkdir -p /mnt/synology_data
# Check physical volumes, volume groups, and logical volumes
pvs
vgs
lvs
# Mount the logical volume
mount /dev/vg1000/lv /mnt/synology_data
# Add entry to fstab for persistent mounting
echo "UUID=e8e8b4a3-487a-46fa-b796-19efce2ab18e /mnt/synology_data ext4 defaults 0 2" >> /etc/fstab
Restoring Virtual Machines
Since I hadn't made backups of my VMs, I had to manually restore them using the disk images still available in the ZFS pool.
Restoring Regular VMs
For standard VMs, I used this approach:
- Create a new VM with the same ID as the original
- Skip adding any disk image during creation
- Rescan to detect the existing disk images
- Attach the found disk to the VM (see the qm set sketch below)
# Rescan to detect existing disk images
qm rescan
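After the rescan, the recovered image shows up as an unused disk in the VM's hardware list and can be attached through the GUI or with qm set. As a sketch, with VM ID 100 and the local-zfs storage name as assumptions (substitute your own):
# Check the config; the rescan adds recovered images as unused disks
qm config 100
# Attach the recovered image as the VM's first SCSI disk
qm set 100 --scsi0 local-zfs:vm-100-disk-0
# Make sure the VM boots from it
qm set 100 --boot order=scsi0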
Restoring LXC Containers
LXC containers required a more complex approach, manually swapping their ZFS datasets back into place:
# List all ZFS filesystems to locate the container volumes
zfs list -t filesystem -o name,mountpoint
# Move the old container dataset aside before recreating the container
zfs rename rpool/data/subvol-106-disk-0 rpool/data/subvol-106-disk-0.old
# Create a new container with the same ID; Proxmox creates a fresh, empty
# dataset, which gets renamed out of the way so the old one can take its place
zfs rename rpool/data/subvol-106-disk-0 rpool/data/subvol-106-disk-0.new
zfs rename rpool/data/subvol-106-disk-0.old rpool/data/subvol-106-disk-0
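Once the old dataset is back under its expected name, the rootfs reference in the container's config lines up again and the container can be started normally (ID 106 matches the example above):
# Confirm the rootfs entry points at the restored dataset
cat /etc/pve/lxc/106.conf
# Start the container and open a shell to verify the data is intact
pct start 106
pct enter 106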
Proxmox ZFS Command Reference
When working with Proxmox and ZFS storage, several commands are particularly useful for maintenance and troubleshooting. Here's a reference of the most common operations:
Basic ZFS Pool Management
# List all ZFS pools
zpool list
# Get detailed status of ZFS pools
zpool status
# Import a pool that isn't automatically recognized
zpool import -f poolname
# Export a pool (safely unmount)
zpool export poolname
# Scrub a pool (verify and repair if possible)
zpool scrub poolname
# Check pool health
zpool status -v poolname
ZFS Dataset Operations
# List all ZFS filesystems/datasets
zfs list
# List snapshots
zfs list -t snapshot
# Create a snapshot
zfs snapshot poolname/dataset@snapshotname
# Rollback to a snapshot
zfs rollback poolname/dataset@snapshotname
# Clone a snapshot
zfs clone poolname/dataset@snapshotname poolname/newdataset
Proxmox-specific ZFS Operations
# List ZFS volumes used by Proxmox
zfs list -r rpool/data
# Check space usage in Proxmox ZFS storage
zfs get used,available,referenced rpool
# Find all VM disk images in ZFS
zfs list -r rpool/data -o name -t volume | grep vm-
# Find all container volumes
zfs list -t filesystem | grep subvol-
Proxmox Directory Structure
Understanding where Proxmox stores its configuration and data files is essential for effective administration and recovery operations:
Key Proxmox Directories
- /etc/pve/ - Main Proxmox VE configuration directory
  - qemu-server/ - VM configurations (one file per VM, named by VMID)
  - lxc/ - LXC container configurations
  - storage.cfg - Storage configuration
  - user.cfg - User authentication configuration
- /var/lib/pve/ - Data directory for Proxmox
  - cluster/ - Cluster configuration
  - storage/ - Additional storage configuration
- /var/log/pve/ - Proxmox log files
  - tasks/ - Task logs (backups, migrations, etc.)
- Storage locations depend on configured storage types:
  - Default ZFS datasets: rpool/data/
  - Default directory storage: /var/lib/vz/
    - /var/lib/vz/images/ - VM disk images for directory storage
    - /var/lib/vz/template/ - Container templates
    - /var/lib/vz/dump/ - Default backup location
VM and Container File Locations
- VM disk images on ZFS:
  - ZFS volumes: rpool/data/vm-[VMID]-disk-[N]
- LXC containers on ZFS:
  - ZFS datasets: rpool/data/subvol-[VMID]-disk-[N]
- VM configuration files:
  - /etc/pve/qemu-server/[VMID].conf
- LXC configuration files:
  - /etc/pve/lxc/[VMID].conf
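For orientation, a minimal VM configuration file in /etc/pve/qemu-server/ looks roughly like this (the VM ID, names, and values are made up for the example):
# /etc/pve/qemu-server/100.conf (illustrative)
boot: order=scsi0
cores: 2
memory: 4096
name: docker-vm
net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0
scsi0: local-zfs:vm-100-disk-0,size=32G
scsihw: virtio-scsi-pci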
Understanding these locations helped me tremendously during the recovery process, especially when manually restoring VMs and containers after the system reinstallation.
Sharing Synology Data with VMs
To make the migrated Synology data available to my VMs, particularly the Docker VM that runs most of my services, I used the 9p virtio filesystem passthrough feature.
Setting Up 9p Virtio Passthrough
First, I edited the VM's configuration file:
# Edit the VM config file
nano /etc/pve/qemu-server/<vmid>.conf
Added the following line at the top of the file:
args: -virtfs local,id=faststore9p,path=/rpool/faststore,security_model=passthrough,mount_tag=faststore9p
Note on naming: For both the id and mount_tag parameters, you should choose a unique, descriptive name that helps identify the shared filesystem. In this example, I used faststore9p for both values, where "faststore" describes the purpose of the share and "9p" indicates the protocol. The id parameter is used internally by QEMU, while the mount_tag must match exactly what you'll use in the VM's fstab entry.
Then, inside the VM, I edited the fstab file to mount this filesystem at boot:
# Add to VM's /etc/fstab
faststore9p /faststore 9p trans=virtio,rw,_netdev 0 0
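Before relying on the fstab entry, it's worth test-mounting the share by hand inside the VM; this assumes the guest kernel ships the 9p virtio modules:
# Inside the VM: load the 9p transport and mount the share manually
modprobe 9pnet_virtio
mkdir -p /faststore
mount -t 9p -o trans=virtio faststore9p /faststore
# Confirm the share is mounted and readable
findmnt /faststore
ls /faststore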
Lessons Learned
The migration process highlighted several important considerations for anyone planning a similar project:
- Always back up before migration: Had I created proper backups, restoring VMs would have been much simpler
- Document your existing setup: Understanding the original disk layout saved significant troubleshooting time
- Expect the unexpected: What seemed like a simple disk transfer required multiple techniques to fully restore
- Plan your boot environment: My original issue stemmed from not properly migrating the boot partition when updating the storage
Conclusion
Moving from a dedicated NAS to an all-in-one virtualization server offers great flexibility and consolidation benefits. While the migration process had its challenges, the end result is a more capable and flexible storage solution that's integrated with my virtualization environment.
The ability to directly transfer disks from a Synology NAS to a Linux-based system like Proxmox demonstrates the advantage of using standard software RAID and LVM technologies, as opposed to proprietary storage solutions that might lock in your data.