Tuesday, April 4, 2017
Assuming you're restoring to the same size and type of disk that the image was originally created from, and you already know the block size, here is the command (modify to your needs):
# unpigz -c myDiskImage.dd.gz | dd bs=512k of=/dev/sdi
To monitor the progress of the restoration, use pipe viewer (pv), like so:
# pv myDiskImage.dd.gz | unpigz | dd bs=512k of=/dev/sdi
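If you'd rather skip pv, a new enough dd (coreutils 8.24 or later ~ an assumption about your setup) can report progress on its own:
# unpigz -c myDiskImage.dd.gz | dd bs=512k of=/dev/sdi status=progress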
Tuesday, March 14, 2017
Creating an Image File for a Whole Hard Drive
In the course of my work, I often have a 2.5" SSD that needs to be burned into an image for safe-keeping or later restoration. In my case, it's a 120GB drive that can be plugged into a simple SATA-USB adapter cable (powered by the USB bus). So, the hardest part is the software side... and it's really not that hard.
All you really need is the Linux 'dd' command and a compression utility. I've added a few minor things to make it faster and more informative.
First, ensure your distro has pigz (a gzip utility that can actually use multiple CPU cores for faster compression) and pv (pipe viewer, so you can monitor progress).
Then, plug in your drive, grab its device name from dmesg (I usually get /dev/sdi, so I'll use that in the examples here), and run fdisk to get its size. The size is optional, but it's nice to have for progress reporting with pv.
# dmesg
<get device id/name from output>
# fdisk -l /dev/sdi
<get size in bytes from output below, usually first line>
Disk /dev/sdi: 120.0 GB, 120034123776 bytes
Finally, issue the command to create an image file and plug in the size you got above to the pv portion of the command (of course, make sure you have enough space on disk for the compressed image, first!)...
# dd if=/dev/sdi | pv -s 120034123776 | pigz --fast > /path/to/new/imgFile.dd.gz
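If you'd rather not copy the byte count by hand, you can pull it straight from the kernel with blockdev (part of util-linux, so it should already be there) and hand it to pv ~ a small variation on the same command:
# SIZE=$(blockdev --getsize64 /dev/sdi)
# dd if=/dev/sdi | pv -s "$SIZE" | pigz --fast > /path/to/new/imgFile.dd.gz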
Wednesday, March 8, 2017
Mount a DD-created Disk Image
I have a lot of old disk images created with simple 'dd' commands. These images include the whole disk (all partitions) and are usually compressed. From time to time, I'll need to add files to those images or access data from them. Ideally, it's just quick to mount them to do that.
First, you need to decompress the image file. For gzip formats, I like to use pigz, since it's much faster on multi-core machines.
Now, you need to understand that you can only mount particular partitions. So, how do you access a particular partition inside a single whole-disk image file? You'll need to use a loopback offset.
Run fdisk to determine the partition table and get the data you'll need to calculate the offset:
# fdisk -l /path/to/IMG.dd
Disk IMG.dd: 120.0 GB, 120034123776 bytes
255 heads, 63 sectors/track, 14593 cylinders, total 234441648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0002beb4
Device Boot      Start        End     Blocks  Id  System
IMG.dd1   *       2048  230336511  115167232  83  Linux
IMG.dd2       230338558  234440703    2051073   5  Extended
IMG.dd5       230338560  234440703    2051072  82  Linux swap
Now you need to calculate the offset. In my example above, I'm interested in partition #1, which fdisk reports as starting at sector 2048. Since each sector is 512 bytes, as reported, multiply 512 x 2048 to get your offset value in bytes. In this case, it's 1048576. You'll use that number in the next step.
Set up your loopback device with the offset you calculated above. I just used loop0, since it wasn't being used for anything on my system at the time; use whatever is available.
# losetup -o 1048576 /dev/loop0 /path/to/IMG.dd
Mount the loopback device:
# mount /dev/loop0 /mnt/img
Now you can use the filesystem just like any other file system you'd mount!
To unmount, do it like you normally would, but then also remove the loopback device:
# umount /mnt/img
# losetup -d /dev/loop0
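As an aside (not part of my original workflow): on systems with a newer util-linux, losetup can scan the image's partition table for you and create per-partition device nodes, which avoids the manual offset math entirely. A quick sketch, assuming loop0 is the first free loop device on your machine:
# losetup -f --show --partscan /path/to/IMG.dd
# mount /dev/loop0p1 /mnt/img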
Thursday, February 16, 2017
Fixing Linux Mint Cinnamon Hangs
Every once in a while, my GUI on Linux Mint hangs. The mouse moves, but that's it. Here's the only way I found to get things back up as they were (without losing all of your open windows and applications).
1. Get a terminal (Ctrl + Alt + F2 or similar).
2. Kill all cinnamon processes (killall -HUP cinnamon)
3. Go back to your GUI (Ctrl + Alt + F8).
After a few seconds or minutes, you'll get all your stuff back.
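An alternative I've seen suggested (not something from my original fix, so your mileage may vary): restart Cinnamon in place from that same terminal, assuming your X session is on display :0 and you're logged in as the same user:
$ DISPLAY=:0 cinnamon --replace &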
Thursday, February 9, 2017
Configure CentOS 6 VNC Server
One of the first things I like to do when provisioning a CentOS 6 server with a desktop GUI is to configure remote desktop; but the standard/included server (vino) only works when a user is already logged in. I usually want to be able to log in remotely (in other words, have my VNC session log me in to a machine that has no users logged in).
First, install and enable the VNC server:
# yum install tigervnc-server
# yum install xorg-x11-fonts-Type1
# chkconfig vncserver on
Set a VNC password for your user:
# su - USER
# vncpasswd
# exit
Add the following lines to the following file:
# vim /etc/sysconfig/vncservers
VNCSERVERS="1:USER"
VNCSERVERARGS[1]="-geometry 800x600"
Add the VNC port to your firewall allow rules. The base port is 5900 plus the display number, so display :1 above listens on TCP 5901. I just did this graphically at the machine's physical terminal, but you can use iptables in the CLI.
See if the VNC server comes up, then stop it again:
# service vncserver restart
# service vncserver stop
Edit the user's VNC configuration to replace the "twm &" line with the following:
# vim /home/USER/.vnc/xstartup
exec gnome-session &
Test VNC server startup again:
# service vncserver restart
At this point, you should be able to point a VNC client at your machine, and you will automatically be logged-in. Note, it usually takes a little time... up to a minute or so, typically.
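For reference, a minimal sketch of the iptables route (assuming the stock CentOS 6 iptables service and display :1 on TCP 5901):
# iptables -I INPUT -p tcp --dport 5901 -j ACCEPT
# service iptables save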
Tuesday, December 6, 2016
Migrating VMware Workstation Guests to KVM/qemu
I recently had the need to get one of my VMware Workstation guests (running CentOS 6.4) migrated to a KVM/qemu hypervisor environment. The VM was in the typical multiple file vmdk configuration that VMware Workstation uses.
Note: I've read that if you're migrating a Windows guest or a SCSI-originated guest (or something to that extent), you may get a BSOD or some other boot error. So this may not work for every case.
In summary, the process basically involves converting that multiple file setup into a single file, and then moving that single file into your storage pool for your KVM setup, from which you then import it. Of course, the devil is in the details, so...
Step 1 - Convert VMware multiple file VM into a single file
(note: if you have snapshots, it's best to delete those to ensure you're getting the latest image)
(note: your paths will vary)
# vmware-vdiskmanager -r /home/vm/vmware/c64/c64_disk.vmdk -t 0 /home/c64.vmdk
During this, I got an error ("VixDiskLib: Invalid configuration file parameter. Failed to read configuration file.") but I was able to continue on just fine. I assume this is about the VM's configuration, but since I was changing RAM amount and network stuff anyway, I didn't care.
Step 2 - Copy that single file to your storage pool / import location
(note: I'm doing it this way, as the next step is resource-intensive and my destination machine is faster - also, your paths will vary - also, I've setup my storage pool to not be the standard /var/lib/libvirt!)
# rsync -av /home/c64.vmdk [dest-machine]:/home/vm/c64/
Step 3 - Login to destination machine and convert the vmdk into a qemu-compatible format
(note: this will become your VM's main disk image file, so just put it in your pool where you plan on leaving it for the machine)
# qemu-img convert /home/vm/c64/c64.vmdk -O qcow2 /home/vm/c64/c64.img
Step 4 - Start virt-manager and import the VM
# virt-manager
- Create a new machine.
- Choose "Import existing disk image" option.
- Continue configuring it as necessary, and then boot it.
After all is said and done, you will need to tweak a few things (e.g. NIC/MAC, persistent net rules file, etc.), but you should basically be up and running.
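If you want to sanity-check the converted image before importing it (an extra step, not strictly required), qemu-img can report the resulting format and virtual size:
# qemu-img info /home/vm/c64/c64.img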
Thursday, December 1, 2016
Headless CentOS 7 Virtualization Host: Part 1 - Host Installation & Setup
Here is how I set up a headless (no monitor) VM host on CentOS 7.2.
Ensure your host's CPU can support virtualization (the result should be > 0):
# egrep -c '(vmx|svm)' /proc/cpuinfo
Install the KVM hypervisor and its necessary packages (this is competitive with vmware):
# yum install kvm libvirt virt-install qemu-kvm
Make sure the host's kernel is ready to perform NAT with the hypervisor for the VM guests. First we'll enable IP forwarding on the host:
# echo "net.ipv4.ip_forward = 1" | sudo tee /etc/sysctl.d/99-ipforward.conf
# sysctl -p /etc/sysctl.d/99-ipforward.conf
Then, we'll modify the network adapter's configuration to use a bridge adapter (in my case, I'm using eth1). Open the config file for editing, comment out everything IP-related (IP, gateway, DNS, etc.), and add a line pointing to the bridge configuration we'll create next:
# vim /etc/sysconfig/network-scripts/ifcfg-eth1
BRIDGE=virbr0
Next, we need to create a bridge adapter configuration file and add the following lines (your info will probably be the same as what you commented out above):
# vim /etc/sysconfig/network-scripts/ifcfg-virbr0
DEVICE="virbr0"
TYPE=BRIDGE
ONBOOT=yes
BOOTPROTO=static
IPADDR="[YOURS]"
NETMASK="[YOURS]"
GATEWAY="[YOURS]"
DNS1="[YOURS]"
If possible, reboot the machine to ensure kernel modules and network settings load and initialize.
Finally, just verify your installation and configuration:
# lsmod | grep kvm
# ip a show virbr0
# virsh -c qemu:///system list
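If the virsh command above complains about connecting to the hypervisor, make sure the libvirt daemon is enabled and running (assuming the stock CentOS 7 service name):
# systemctl enable libvirtd
# systemctl start libvirtd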
Tuesday, November 29, 2016
Linux Partitions >2TB
For Linux hard drive partitions larger than 2TB, fdisk just doesn't cut it. You'll have to use parted, instead. But keep in mind that since this parted method uses GPT (part of EFI) instead of the old MBR type supported by most/all BIOS, you'll need to make sure your BIOS supports it (most newer ones should).
The example I'm using in my case was for a brand-new 3TB secondary storage device (no existing partitions), with the primary drive being for boot and of the MBR type on a smaller dedicated hard drive... hence "sdb" in the example below, instead of "sda."
First, verify the size of your drive:
# fdisk -l /dev/sdb
Then, get into the GNU parted shell:
# parted /dev/sdb
Once in the parted shell, issue the following commands...
First, create a GPT partition table:
(parted) mklabel gpt
Second, tell parted you'd like to use TB as the unit:
(parted) unit TB
Then, tell parted to create the 3TB partition:
(parted) mkpart primary 0.00TB 3.00TB
You may verify the partition table and quit:
(parted) print
(parted) quit
After that, all you have to do is create your file system and mount the drive.
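For completeness, a quick sketch of that last step (assuming the new partition shows up as /dev/sdb1, you want ext4, and /data is your mount point):
# mkfs.ext4 /dev/sdb1
# mkdir -p /data
# mount /dev/sdb1 /data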
Friday, October 28, 2016
Safely Enlarging a Linux /home Partition
I recently cloned a 1TB server hard drive onto a 1.2TB disk, using Clonezilla. This resulted in 0.2TB being unutilized, so I had to enlarge it. But how to do so without losing your data? Something like this (in my case, I needed to enlarge /dev/sda3, a primary partition):
# fdisk /dev/sda
Print the current partition table (so you can note the starting cylinder of the partition you're going to enlarge).
After noting the starting cylinder, delete the third partition.
Re-create the third partition. You MUST make sure it starts with the cylinder noted above! In my case, this was all just defaults. When done, just write the table to disk and reboot.
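Here is a rough sketch of that fdisk session (interactive prompts abbreviated; your partition number and starting cylinder will differ, so treat this as illustration only):
# fdisk /dev/sda
Command (m for help): p   (print the table and note the start of /dev/sda3)
Command (m for help): d   (delete partition 3)
Command (m for help): n   (re-create partition 3, starting at the cylinder noted above)
Command (m for help): w   (write the table to disk, then reboot)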
Next, you'll use resize2fs to expand the file system to match the partition it's now on.
# resize2fs /dev/sda3
Here is the output I was presented with. The process took only about a minute.
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/sda3 is mounted on /home; on-line resizing required
old desc_blocks = 55, new_desc_blocks = 67
Performing an on-line resize of /dev/sda3 to 279054750 (4k) blocks.
The filesystem on /dev/sda3 is now 279054750 blocks long.
Very painless!
Before (df -h):
/dev/sda3 865G 7.2G 814G 1% /home
After (df -h):
/dev/sda3 1.1T 7.2G 988G 1% /home
Thursday, August 18, 2016
Mount Linux NFS Share on a Mac
So apparently, you cannot just mount "-t nfs" a Linux NFS share on Mac OS X. Credit for this find goes to: http://www.cyberciti.biz/faq/apple-mac-osx-nfs-mount-command-tutorial/
Make sure your /etc/exports are right and then check...
$ showmount -e [nfs-server]
If all is good, mount with the following command (omit the ",rw" if desired)...
$ sudo mount -t nfs -o resvport,rw [nfs-server]:/[exported-dir] [mount-point]
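For reference, the export line on the Linux side has roughly this shape (the path and network here are placeholders, not my real values; adjust the options to your needs):
/srv/share 192.168.1.0/24(rw,sync,no_subtree_check)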
Wednesday, February 24, 2016
Detailed USB Device Information from the CLI
I recently needed to get a model number of a Logitech webcam, but it wasn't printed on the device. So I found a neat way to get it from the device itself.
First, run lsusb to list the USB devices on the system. You'll get something like the following:
# lsusb
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 001 Device 003: ID 046d:082d Logitech, Inc.
Bus 006 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 007 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 006 Device 002: ID 2109:0811
Then, run it with the bus and device numbers of the device you want information for... for example, in my case of the Logitech webcam (there's a lot more to it, but I truncated to fit here ~ you may want to output it to a file):
# lsusb -D /dev/bus/usb/001/003
Device: ID 046d:082d Logitech, Inc.
Device Descriptor:
bLength 18
bDescriptorType 1
bcdUSB 2.00
bDeviceClass 239 Miscellaneous Device
bDeviceSubClass 2 ?
bDeviceProtocol 1 Interface Association
bMaxPacketSize0 64
idVendor 0x046d Logitech, Inc.
idProduct 0x082d
bcdDevice 0.11
iManufacturer 0
iProduct 2 HD Pro Webcam C920
iSerial 1 2D62DFDF
bNumConfigurations 1
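An equivalent that skips the bus/device path and uses the vendor:product ID from the listing instead (same truncation caveat applies):
# lsusb -v -d 046d:082d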
Wednesday, January 27, 2016
Mounting CentOS 6 NFS on a CentOS 4 Client
I have a CentOS 6 machine that exports NFS shares for various virtual machines to mount and use as needed (mostly to build code on various platforms and architectures).
One of the problem machines is a CentOS 4 VM that mounts the CentOS 6 NFS share upon boot. I fought with this damn VM for an hour before the simple solution hit me: CentOS 4 mounts with an older NFS version by default (the plain nfs type) than CentOS 6 does (nfs4).
Here is the relevant line in the CentOS 6 NFS server machine's /etc/exports file:
/home/sm *(ro,insecure,no_root_squash)
Here is the CentOS 4 NFS client VM's /etc/fstab file (the one that does not work):
mndevtest:/home/sm /mnt/smhome nfs defaults 0 0
And here is the CentOS 4 NFS client VM's /etc/fstab file (the one that does work):
mndevtest:/home/sm /mnt/smhome nfs4 defaults 0 0
Literally a ONE CHARACTER change fixed it. I love computers. /s
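To test the change without rebooting the VM, the hand-run equivalent of the working fstab line should do it (assuming the mount point already exists):
# mount -t nfs4 mndevtest:/home/sm /mnt/smhome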
Monday, January 11, 2016
Fixing the Reboot Command
Somewhere along the line, my Ubuntu 14.04.3 installation (on a Zotac Z-Box) stopped accepting any typed reboot command. It seems to have happened after I removed gnome-power-manager; but even though I reinstalled it, the reboot command still wouldn't work. I'm not sure whether that was the cause, but it seems likely. Anyway, thanks to http://michalorman.com/2013/10/fix-ubuntu-freeze-during-restart/, this little trick worked...
Edit grub's default configuration:
# vim /etc/default/grub
Add the following to the GRUB_CMDLINE_LINUX_DEFAULT string:
reboot=warm,cold,bios,smp,triple,kbd,acpi,efi,pci,force
The kernel will attempt each type of reboot in that order, until one works.
After saving that file, be sure to run the following command to enact your changes:
# update-grub
Reboot your machine, and the reboot command should now work.
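For context, here is roughly what that line in /etc/default/grub ends up looking like (assuming your existing value is the stock "quiet splash"; keep whatever else is already there):
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash reboot=warm,cold,bios,smp,triple,kbd,acpi,efi,pci,force"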
Tuesday, December 29, 2015
Installing Older or Multiple GCC Versions on Ubuntu 14.04 Trusty
In trying to port my company's software from CentOS 6 to Ubuntu 14.04, it became apparent that I may need to modify the Ubuntu build environment (don't ask... much easier than changing the 15 years worth of software... trust me). Just to give you some idea, the software began in the early 90s with GCC v2 on 32 bit platforms (target was CentOS 4). It then evolved to CentOS 5 and then 6, all the way up through GCC v3.4. To support running our software on the latest hardware, Ubuntu 14.04 LTS became our desired target.
Ubuntu 14.04 comes with GCC v4.8. The last version of Ubuntu to include GCC v3.4 was Hardy. So we'll need to include that in our repo source list. The problem here was that all the posts I found referenced incorrect old-releases addresses, so here is what I found that worked...
Note, if you haven't done so already, be sure to install the build stuff:
apt-get install build-essential
First add the old repos to the sources file, so apt can find what we need:
vim /etc/apt/sources.list
deb http://old-releases.ubuntu.com/ubuntu hardy universe
deb-src http://old-releases.ubuntu.com/ubuntu hardy universe
Then get the latest repo data:
apt-get update
Then install the older GCC:
apt-get install gcc-3.4
apt-get install g++-3.4
Next, we'll need to configure multiple compilers (note, the last number is just an arbitrary priority):
update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.8 20
update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-3.4 10
After that, just select which compiler you want to use before you go to compile:
update-alternatives --config gcc
I also found that I had to update the linker's symbolic link, as it was pointing to a non-existent library file (/lib/libgcc_s.so.1):
ln -sfn /lib/x86_64-linux-gnu/libgcc_s.so.1 /usr/lib/gcc/x86_64-linux-gnu/3.4.6/libgcc_s.so
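If you also need to switch C++ compilers the same way (I only show gcc above, so treat this as an untested sketch following the same pattern), the alternatives setup for g++ would look like:
update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-4.8 20
update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-3.4 10
update-alternatives --config g++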
Tuesday, July 14, 2015
Fix Google Chrome Crashes by Increasing Open Files Limit in Linux Mint
I found myself frequently having Google Chrome crash when opening many tabs across several windows (don't judge me) on my Linux Mint 17 Cinnamon desktop.
Upon further investigation, I found that Chrome was throwing some error about too many open files.
~ tail -f ~/.xsession-errors
I can't remember the exact error, but it was something in shared_memory_posix about /dev/shm/.com.google.Chrome and "Too many open files."
If you look at the configured default open limit, it's very low (like 1024).
~ ulimit -n
The fix (so far, so good) seems to be editing /etc/security/limits.conf and adding the following lines at the end of the file:
* hard nofile 65535
* soft nofile 65535
Reboot the machine, and you should be all good.
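After the reboot, you can confirm the new hard and soft limits actually took effect from any terminal:
~ ulimit -Hn
~ ulimit -Sn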
Note: I saw several people modifying their fs.file-max settings, but that made Cinnamon not even want to load and played hardcore shenanigans with my machine, overall.
Tuesday, September 17, 2013
Setting Up RAID-1 on Linux Mint 15, Revisited using manual commands
So, after I ran the raider program a while back, I had to do some other things with my machine ~ and long story short, I had to run raider once again. Only this time, I didn't do it right (I used the machine for a full day after running the initial raider setup and before swapping the drives - meaning that when I finally got around to completing the "raider --run" step, things were out of sync and crapped out)...
Originally, I started with my /dev/sda having all my stuff on it and /dev/sdb being the blank drive I wanted to add as a new member of a RAID-1 array... but, instead, (thanks to my screw-up) I ended up with /dev/sdb getting my system (thanks to raider's first step) and /dev/sda getting completely erased. Not a complete disaster, since I still had all my stuff on at least one of the drives, but I didn't have the time to wait for two more rebuilds, nor the patience to swap drives and stuff. So, I decided to jump in and try to figure out mdadm, proper... at least this was a good introduction to a little bit of the utility.
Since my mdraid had already been created (again, thanks to raider), all I needed to do was manage the array. So, the first thing I wanted to do was copy sdb to sda. This turns out to be pretty easy:
- First, copy the partition table from your source drive to the new RAID member drive:
# sfdisk -d /dev/sdb | sfdisk /dev/sda
- Then, add the erased disk to the raid array:
- Add /boot partition:
# mdadm --manage /dev/md0 --add /dev/sda1
- Add / partition:
# mdadm --manage /dev/md1 --add /dev/sda3
- Add /home partition:
# mdadm --manage /dev/md2 --add /dev/sda4
At this point, I checked the array (# cat /proc/mdstat) and discovered, to my surprise, that the array was already rebuilding (i.e. copying stuff from my sdb to my sda)!
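To keep an eye on the rebuild as it runs (an extra convenience, not part of the steps above):
# watch -n 10 cat /proc/mdstat
# mdadm --detail /dev/md1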
Mission accomplished: I can now continue using my machine while the array rebuilds in the background. Since that will take another day or so, I'll continue this if there are other issues.....
To be continued?!
Friday, September 6, 2013
Setting Up RAID-1 on Linux Mint 15, Made Easy
Having come from the RHEL/CentOS world, it was a bit of a surprise to me when I installed Linux Mint 15, and noticed the lack of a software RAID setup step during the install process. But this can easily be done once installation is complete ("easy," relative to manually running mdadm, anyway).
Here's a handy little tool I found to ease the implementation of software RAID-1: raider (http://raider.sourceforge.net/).
By using this tool, I didn't lose any data or anything... just installed the OS and then ran raider after the fact. It really wasn't too painful (just takes some time and requires pulling cables a couple times). And, don't be put off by the low version number ~ it worked great for me, for this purpose.
My hardware consists of two hard drives, as follows:
(no LVM, no encryption - just plain, pure hardware devices)
/dev/sda (2TB WD-black - Linux Mint 15 already installed)
/dev/sdb (2TB WD-black - some previous file system I don't care about anymore)
If you follow the directions (including physically swapping the two hard drives' SATA cables and allowing them to rebuild), you'll get great results... at least I did.
- Download and then extract the package (I used version 0.13.2) anywhere (I used my ~/).
- CD into the raider directory and install:
$ sudo ./install.sh
- Make sure you have the required packages installed (I only needed mdadm):
$ sudo apt-get install mdadm parted sfdisk hdparm rsync bc wget
- Make sure you have at least one of the following installed (I already had mkinitramfs): dracut, mkinitcpio, mkinitramfs, mkinitrd, or genkernel
- Backup your stuff! (duh) ~ I didn't need to resort to it, but you never know.
- Reboot into single user mode:
- When booting, hold the shift key until you see the GRUB boot screen.
- Press "e" to edit the line, and append the word "single" (without quotes) at the end of the line (in my case, after the "ro quiet splash" part)
- Press "Ctrl" + "x" to execute the boot (it'll drop you into command line mode)
- Log in as root.
- Run the following command... raider will use the drive in the first physical slot (almost always sda) as the "source" drive and you will lose whatever is on the second drive (sdb). Be aware that this might take a long time ~ it took around 5 hours with my 2TB drive that had about 500GB of stuff on it.
# raider -R1 sda sdb
- Note: If your sdb had stuff on it before (especially RAID), you might get a message about needing to erase it. If you do, just run the command it says to run: raider --erase /dev/sdb
- Note: If you get the "fatal error" about not being able to format a particular partition (in my case, it was the swap partition of the old disk), then you'll need to first do an fdisk on sdb, and delete all the partitions (just to be safe - you don't care about them anymore, anyway, right?). Be sure to reboot after you write the changes in fdisk. Then run the raider command again.
- Once that completes...
- Shutdown the machine
- Physically swap the two hard drives' data connectors with each other
- Boot back into single user mode (repeat step 6 above)
- Run the command:
# raider --run
- Essentially, we've now simulated a degraded array (sdb isn't complete, but we're booting from it as sda this time), and we're now going to rebuild it. Other than copying sda's contents to sdb, the added bonus is that we're also testing the array's ability to rebuild here.
- Note: this will take quite a long time... for my 2TB drives, it took about 8 hours!
- Once that completes... (at this point, you should now have two identical drives as members of a functional and working RAID-1 array)
- Shutdown the machine
- Physically swap back the two hard drives' data connectors with each other (the way they were originally)
- Boot back up as you normally do... you're all done!
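Once you're booted back up normally, a quick sanity check that both drives are active members (my own habit, not part of raider's instructions):
$ cat /proc/mdstat
$ sudo mdadm --detail /dev/md0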