From: noi on
On Fri, 06 Oct 2006 03:49:41 +0000, Ohmster wrote this:

> noi <noi(a)siam.com> wrote in
> news:_3JUg.11229$6S3.4315(a)newssvr25.news.prodigy.net:
>
>> Found an article (snippet below) on recovering RAID and LVM volumes.
>> It's not for the faint hearted. Plz read the recovering Lvm2 volume
>> part of article (and displays) entirely. BTW /dev/md2 is raid drive so
>> I'd think you'd substitute /dev/hdb.
>
> Dude this might work! I did follow the instructions that you gave me and
> also referred to the original web page here:
>
> http://www.linuxjournal.com/article/8874
>
> I created the backup file as described in the article, my VolGroup01 looks
> like this:
> ---------------------------------------------------------------------
> VolGroup01 {
> id = "Pu8j7l-2dEd-GhU7-tBAr-HDq7-r6OF-23O8Jo"
> seqno = 1
> status = ["RESIZEABLE", "READ", "WRITE"]
> extent_size = 8192
> max_lv = 0
> max_pv = 0
> physical_volumes {
> pv0 {
> id = "xE0F6K-q9r4-1K3o-iYuh-NlWc-9597-8WGobu"
> device = "/dev/hdb2"
> status = ["ALLOCATABLE"]
> pe_start = 384
> pe_count = 48593
> }
> }
> # Generated by LVM2: Sun Oct 1 16:33:20 2006
> ---------------------------------------------------------------------
>
> Now that is exactly the same as the author dude's file as in listing 6 on
> his web page, here:
> ---------------------------------------------------------------------
> VolGroup01 {
> id = "xQZqTG-V4wn-DLeQ-bJ0J-GEHB-4teF-A4PPBv"
> seqno = 1
> status = ["RESIZEABLE", "READ", "WRITE"]
> extent_size = 65536
> max_lv = 0
> max_pv = 0
> physical_volumes {
> pv0 {
> id = "tRACEy-cstP-kk18-zQFZ-ErG5-QAIV-YqHItA"
> device = "/dev/md2"
> status = ["ALLOCATABLE"]
> pe_start = 384
> pe_count = 2365
> }
> }
> # Generated by LVM2: Sun Feb 5 22:57:19 2006
> ---------------------------------------------------------------------
>
> Mine came out right; there is no difference other than the actual data.
> I cannot restore my volume group like he does in this part, though:
>
> [root(a)recoverybox ~]# vgcfgrestore -f VolGroup01 VolGroup01
> [root(a)recoverybox ~]# vgscan
> Reading all physical volumes. This may take a while...
> Found volume group "VolGroup01" using metadata type lvm2
> Found volume group "VolGroup00" using metadata type lvm2
> [root(a)recoverybox ~]# pvscan
> PV /dev/md2 VG VolGroup01 lvm2 [73.91 GB / 32.00 MB free]
> PV /dev/hda2 VG VolGroup00 lvm2 [18.91 GB / 32.00 MB free]
> Total: 2 [92.81 GB] / in use: 2 [92.81 GB] / in no VG: 0 [0 ]
> [root(a)recoverybox ~]# vgchange VolGroup01 -a y
> 1 logical volume(s) in volume group "VolGroup01" now active
> [root(a)recoverybox ~]# lvscan
> ACTIVE '/dev/VolGroup01/LogVol00' [73.88 GB] inherit
> ACTIVE '/dev/VolGroup00/LogVol00' [18.38 GB] inherit
> ACTIVE '/dev/VolGroup00/LogVol01' [512.00 MB] inherit
>
>
> Here is what happens when I try:
>
> [root(a)ohmster recoverybox]# vgcfgrestore -f VolGroup01 VolGroup01
> Parse error at line 19: unexpected token
> Couldn't read volume group metadata.
> Restore failed.
> [root(a)ohmster recoverybox]#
>
> What the hell is the problem? I don't even have a line 19 in my
> VolGroup01 file. The comment line, the very last line, is line 17, and
> there is nothing after it. Why doesn't this work?
>
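For what it's worth, the VolGroup01 text as posted never closes its outer "VolGroup01 {" brace, and a full lvm2 backup file also carries wrapper fields (contents, version, description) ahead of the VG section; either could make the parser count lines past the 17 you can see. A quick brace-balance check, sketched against a hypothetical copy of the posted text under /tmp:

```shell
# Hypothetical copy of the VolGroup01 metadata exactly as posted.
cat > /tmp/VolGroup01.sample <<'EOF'
VolGroup01 {
id = "Pu8j7l-2dEd-GhU7-tBAr-HDq7-r6OF-23O8Jo"
seqno = 1
status = ["RESIZEABLE", "READ", "WRITE"]
extent_size = 8192
max_lv = 0
max_pv = 0
physical_volumes {
pv0 {
id = "xE0F6K-q9r4-1K3o-iYuh-NlWc-9597-8WGobu"
device = "/dev/hdb2"
status = ["ALLOCATABLE"]
pe_start = 384
pe_count = 48593
}
}
# Generated by LVM2: Sun Oct 1 16:33:20 2006
EOF

# Each brace sits on its own line here, so counting matching lines
# is enough to spot an unbalanced file.
opens=$(grep -c '{' /tmp/VolGroup01.sample)
closes=$(grep -c '}' /tmp/VolGroup01.sample)
echo "open braces: $opens, close braces: $closes"
# prints: open braces: 3, close braces: 2
test "$opens" -eq "$closes" || echo "unbalanced: a closing brace is missing"
```

If the count really is off, re-copying the file (or adding the missing final "}") before retrying vgcfgrestore would be the next experiment.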

As best I can tell. It's late for me. But I read one of the comments on
the article the other night, and the commenter said he had to use the full
volume group name, not just volgroup1, ie, /dev/vo.....

Only another day till the weekend. So I suggest re-reading the article to
check whether we've missed a step before you try the vgcfgrestore again.
I only browsed the end of the article, where it discusses LVM recovery,
and some of the comments.

> One thing I am doing differently from the author dude is that I am not
> doing this on another machine, because I only have one linux machine
> right now. See, this part I did not do:
>
> "To recover, the first thing to do is to move the drive to another
> machine. You can do this pretty easily by putting the drive in a USB2
> hard drive enclosure. It then will show up as a SCSI hard disk device,
> for example, /dev/sda, when you plug it in to your recovery computer.
> This reduces the risk of damaging the recovery machine while attempting
> to install the hardware from the original computer."
>
> You think that might be a problem?

I didn't read that part. Was the author talking about his RAID devices or
the LVM recovery process? Not really, I think. I just disconnect a drive
when I experiment.

>
> [root(a)ohmster recoverybox]# vgscan
> Reading all physical volumes. This may take a while...
> Found volume group "VolGroup01" using metadata type lvm2
> Found volume group "VolGroup00" using metadata type lvm2
> [root(a)ohmster recoverybox]# lvscan
> ACTIVE '/dev/VolGroup00/LogVol00' [184.22 GB] inherit
> ACTIVE '/dev/VolGroup00/LogVol01' [1.94 GB] inherit
> [root(a)ohmster recoverybox]# pvscan
> PV /dev/hdb2 VG VolGroup01 lvm2 [189.82 GB / 189.82 GB free]
> PV /dev/hda2 VG VolGroup00 lvm2 [186.19 GB / 32.00 MB free]
> Total: 2 [376.00 GB] / in use: 2 [376.00 GB] / in no VG: 0 [0 ]
> [root(a)ohmster recoverybox]#
>
> It looks like the system found both drives, although it says that hdb2
> is empty. It is not; all of my data is in there.
>

Sadly the backup lvm configuration is on the unreadable hdb2.

Really can't say about the pvscan. I thought hda and hdb were different
sized drives.

When you look at it again, could you run the commands with "-v" or "-vv",
ie, pvscan -v, pvdata -v?


> Oh man, I am so close now, somebody has to be able to help, what do you
> think, noi? So close but so far, this is major bumming me out. :(

From: Ohmster on
noi <noi(a)siam.com> wrote in news:cFoVg.2180$NE6.1209
@newssvr11.news.prodigy.com:

[snip]
> Sadly the backup lvm configuration is on the unreadable hdb2.
>
>> Really can't say about the pvscan. I thought hda and hdb were different
>> sized drives.
>
> When you look at it again, could you run the commands with "-v" or "-vv",
> ie, pvscan -v, pvdata -v?
>

Late for work, but trying to get a direction to go here. pvdata is not a
recognized command, but pvscan with -v or -vv works; see output. Have to
run, will be back soon. Thank you so much, noi.

[root(a)ohmster ~]# pvdata -v
-bash: pvdata: command not found
[root(a)ohmster ~]# pvscan
PV /dev/hdb2 VG VolGroup01 lvm2 [189.82 GB / 189.82 GB free]
PV /dev/hda2 VG VolGroup00 lvm2 [186.19 GB / 32.00 MB free]
Total: 2 [376.00 GB] / in use: 2 [376.00 GB] / in no VG: 0 [0 ]
[root(a)ohmster ~]# pvscan -v
Wiping cache of LVM-capable devices
Wiping internal VG cache
Walking through all physical volumes
PV /dev/hdb2 VG VolGroup01 lvm2 [189.82 GB / 189.82 GB free]
PV /dev/hda2 VG VolGroup00 lvm2 [186.19 GB / 32.00 MB free]
Total: 2 [376.00 GB] / in use: 2 [376.00 GB] / in no VG: 0 [0 ]
[root(a)ohmster ~]# pvscan -vv
Setting global/locking_type to 1
Setting global/locking_dir to /var/lock/lvm
File-based locking enabled.
Wiping cache of LVM-capable devices
Wiping internal VG cache
Walking through all physical volumes
/dev/ramdisk: size is 32768 sectors
/dev/ramdisk: size is 32768 sectors
/dev/ramdisk: No label detected
/dev/hda: size is 390721968 sectors
/dev/md0: size is 0 sectors
/dev/dm-0: size is 386334720 sectors
/dev/dm-0: size is 386334720 sectors
/dev/dm-0: No label detected
/dev/ram: size is 32768 sectors
/dev/ram: size is 32768 sectors
/dev/ram: No label detected
/dev/hda1: size is 208782 sectors
/dev/hda1: size is 208782 sectors
/dev/hda1: No label detected
/dev/dm-1: size is 4063232 sectors
/dev/dm-1: size is 4063232 sectors
/dev/dm-1: No label detected
/dev/ram2: size is 32768 sectors
/dev/ram2: size is 32768 sectors
/dev/ram2: No label detected
/dev/hda2: size is 390508020 sectors
/dev/hda2: size is 390508020 sectors
/dev/hda2: lvm2 label detected
/dev/ram3: size is 32768 sectors
/dev/ram3: size is 32768 sectors
/dev/ram3: No label detected
/dev/ram4: size is 32768 sectors
/dev/ram4: size is 32768 sectors
/dev/ram4: No label detected
/dev/ram5: size is 32768 sectors
/dev/ram5: size is 32768 sectors
/dev/ram5: No label detected
/dev/ram6: size is 32768 sectors
/dev/ram6: size is 32768 sectors
/dev/ram6: No label detected
/dev/ram7: size is 32768 sectors
/dev/ram7: size is 32768 sectors
/dev/ram7: No label detected
/dev/ram8: size is 32768 sectors
/dev/ram8: size is 32768 sectors
/dev/ram8: No label detected
/dev/ram9: size is 32768 sectors
/dev/ram9: size is 32768 sectors
/dev/ram9: No label detected
/dev/ram10: size is 32768 sectors
/dev/ram10: size is 32768 sectors
/dev/ram10: No label detected
/dev/ram11: size is 32768 sectors
/dev/ram11: size is 32768 sectors
/dev/ram11: No label detected
/dev/ram12: size is 32768 sectors
/dev/ram12: size is 32768 sectors
/dev/ram12: No label detected
/dev/ram13: size is 32768 sectors
/dev/ram13: size is 32768 sectors
/dev/ram13: No label detected
/dev/ram14: size is 32768 sectors
/dev/ram14: size is 32768 sectors
/dev/ram14: No label detected
/dev/ram15: size is 32768 sectors
/dev/ram15: size is 32768 sectors
/dev/ram15: No label detected
/dev/hdb: size is 398297088 sectors
/dev/hdb1: size is 208782 sectors
/dev/hdb1: size is 208782 sectors
/dev/hdb1: No label detected
/dev/hdb2: size is 398074635 sectors
/dev/hdb2: size is 398074635 sectors
/dev/hdb2: lvm2 label detected
/dev/hdb2: lvm2 label detected
/dev/hdb2: lvm2 label detected
/dev/hda2: lvm2 label detected
/dev/hda2: lvm2 label detected
PV /dev/hdb2 VG VolGroup01 lvm2 [189.82 GB / 189.82 GB free]
PV /dev/hda2 VG VolGroup00 lvm2 [186.19 GB / 32.00 MB free]
Total: 2 [376.00 GB] / in use: 2 [376.00 GB] / in no VG: 0 [0 ]
[root(a)ohmster ~]#


Any more commands to shed light? There is such a thing as backup and
archive in /etc/lvm.

[root(a)ohmster lvm]# pwd
/etc/lvm
[root(a)ohmster lvm]# ls -la
total 64
drwxr-xr-x 4 root root 4096 Sep 30 16:48 .
drwxr-xr-x 106 root root 12288 Oct 5 06:21 ..
drwx------ 2 root root 4096 Oct 1 16:33 archive
drwx------ 2 root root 4096 Oct 1 16:33 backup
-rw------- 1 root root 1282 Oct 6 08:24 .cache
-rw-r--r-- 1 root root 10538 Feb 11 2006 lvm.conf
[root(a)ohmster lvm]#

[root(a)ohmster lvm]# cd backup
[root(a)ohmster backup]# ls -la
total 24
drwx------ 2 root root 4096 Oct 1 16:33 .
drwxr-xr-x 4 root root 4096 Sep 30 16:48 ..
-rw------- 1 root root 1324 Oct 1 16:33 VolGroup00
-rw------- 1 root root 717 Oct 1 16:33 VolGroup01
[root(a)ohmster backup]#

The VolGroup01 backup, as you can see, is a much smaller file. It does
not show the logical volumes on hdb the way VolGroup00 does for hda. In
the archive directory, they show both VolGroups...

[root(a)ohmster archive]# ls -la
total 76
drwx------ 2 root root 4096 Oct 1 16:33 .
drwxr-xr-x 4 root root 4096 Sep 30 16:48 ..
-rw------- 1 root root 1361 Sep 30 16:48 VolGroup00_00000.vg
-rw------- 1 root root 1341 Oct 1 16:27 VolGroup00_00001.vg
-rw------- 1 root root 1314 Oct 1 16:27 VolGroup00_00002.vg
-rw------- 1 root root 1314 Oct 1 16:27 VolGroup00_00003.vg
-rw------- 1 root root 1314 Oct 1 16:28 VolGroup00_00004.vg
-rw------- 1 root root 1314 Oct 1 16:28 VolGroup00_00005.vg
-rw------- 1 root root 1314 Oct 1 16:28 VolGroup00_00006.vg
-rw------- 1 root root 1314 Oct 1 16:28 VolGroup00_00007.vg
-rw------- 1 root root 1328 Oct 1 16:28 VolGroup00_00008.vg
-rw------- 1 root root 1327 Oct 1 16:31 VolGroup00_00009.vg
-rw------- 1 root root 1328 Oct 1 16:31 VolGroup00_00010.vg
-rw------- 1 root root 1325 Oct 1 16:33 VolGroup00_00011.vg
-rw------- 1 root root 1325 Oct 1 16:33 VolGroup00_00012.vg
-rw------- 1 root root 718 Oct 1 16:33 VolGroup01_00000.vg
[root(a)ohmster archive]#
From: noi on
On Fri, 06 Oct 2006 12:28:29 +0000, Ohmster wrote this:

> noi <noi(a)siam.com> wrote in news:cFoVg.2180$NE6.1209
> @newssvr11.news.prodigy.com:
>
> [snip]
>> Sadly the backup lvm configuration is on the unreadable hdb2.
>>
>> Really can't say about the pvscan. I thought hda and hdb were different
>> sized drives.
>>
>> When you look at it again, could you run the commands with "-v" or "-vv",
>> ie, pvscan -v, pvdata -v?
>>
>>
> Late for work, but trying to get a direction to go here. pvdata is not a
> recognized command, but pvscan with -v or -vv works; see output. Have to
> run, will be back soon. Thank you so much, noi.
>
> [root(a)ohmster ~]# pvdata -v

Maybe it was replaced by pvdisplay

> -bash: pvdata: command not found
> [root(a)ohmster ~]# pvscan
> PV /dev/hdb2 VG VolGroup01 lvm2 [189.82 GB / 189.82 GB free]
> PV /dev/hda2 VG VolGroup00 lvm2 [186.19 GB / 32.00 MB free]
> Total: 2 [376.00 GB] / in use: 2 [376.00 GB] / in no VG: 0 [0 ]
> [root(a)ohmster ~]# pvscan -v
> Wiping cache of LVM-capable devices
> Wiping internal VG cache
> Walking through all physical volumes
> PV /dev/hdb2 VG VolGroup01 lvm2 [189.82 GB / 189.82 GB free]
> PV /dev/hda2 VG VolGroup00 lvm2 [186.19 GB / 32.00 MB free]
> Total: 2 [376.00 GB] / in use: 2 [376.00 GB] / in no VG: 0 [0 ]

[snip] The -vv output is not very helpful.

>
>
> Any more commands to shed light? There is such a thing as backup and
> archive in /etc/lvm.
>

Thinking.

Would you have a backup that included /etc on FC3 prior to installing the
FC5? This guy used a 6 month old backup to restore his data.

http://codeworks.gnomedia.com/archives/2005/general/lvm_recovery/

Since hdb2 is now v01 have you tried to mount it as a LVM?

$ mount /dev/VolGroup01/LogVol00 /mnt/VG01/LV_FC3
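Since /etc/lvm/archive keeps one numbered .vg file per metadata change, another experiment is to hand the newest archived copy to vgcfgrestore. A sketch with placeholder files under /tmp (on the real box the glob would point at /etc/lvm/archive, and lvm2 can also enumerate archives with vgcfgrestore --list VolGroup01, if your build has that option):

```shell
# Placeholder files mimicking the archive listing from the thread;
# /tmp/lvm-archive stands in for the real /etc/lvm/archive.
mkdir -p /tmp/lvm-archive
for n in 00000 00001 00002; do
    touch "/tmp/lvm-archive/VolGroup00_${n}.vg"
done
touch /tmp/lvm-archive/VolGroup01_00000.vg

# The sequence number in the name increases with every change, so a
# plain sort puts the newest metadata for a volume group last.
latest=$(ls /tmp/lvm-archive/VolGroup01_*.vg | sort | tail -n 1)
echo "newest VolGroup01 archive: $latest"

# On the real system the next step would be (not run here):
#   vgcfgrestore -f "$latest" VolGroup01
```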


> [root(a)ohmster lvm]# pwd
> /etc/lvm
> [root(a)ohmster lvm]# ls -la
> total 64
> drwxr-xr-x 4 root root 4096 Sep 30 16:48 .
> drwxr-xr-x 106 root root 12288 Oct 5 06:21 ..
> drwx------ 2 root root 4096 Oct 1 16:33 archive
> drwx------ 2 root root 4096 Oct 1 16:33 backup
> -rw------- 1 root root 1282 Oct 6 08:24 .cache
> -rw-r--r-- 1 root root 10538 Feb 11 2006 lvm.conf
> [root(a)ohmster lvm]#
>
> [root(a)ohmster lvm]# cd backup
> [root(a)ohmster backup]# ls -la
> total 24
> drwx------ 2 root root 4096 Oct 1 16:33 .
> drwxr-xr-x 4 root root 4096 Sep 30 16:48 ..
> -rw------- 1 root root 1324 Oct 1 16:33 VolGroup00
> -rw------- 1 root root 717 Oct 1 16:33 VolGroup01
> [root(a)ohmster backup]#
>
> The VolGroup01 backup, as you can see, is a much smaller file. It does
> not show the logical volumes on hdb the way VolGroup00 does for hda. In
> the archive directory, they show both VolGroups...
>
> [root(a)ohmster archive]# ls -la
> total 76
> drwx------ 2 root root 4096 Oct 1 16:33 .
> drwxr-xr-x 4 root root 4096 Sep 30 16:48 ..
> -rw------- 1 root root 1361 Sep 30 16:48 VolGroup00_00000.vg
> -rw------- 1 root root 1341 Oct 1 16:27 VolGroup00_00001.vg
> -rw------- 1 root root 1314 Oct 1 16:27 VolGroup00_00002.vg
> -rw------- 1 root root 1314 Oct 1 16:27 VolGroup00_00003.vg
> -rw------- 1 root root 1314 Oct 1 16:28 VolGroup00_00004.vg
> -rw------- 1 root root 1314 Oct 1 16:28 VolGroup00_00005.vg
> -rw------- 1 root root 1314 Oct 1 16:28 VolGroup00_00006.vg
> -rw------- 1 root root 1314 Oct 1 16:28 VolGroup00_00007.vg
> -rw------- 1 root root 1328 Oct 1 16:28 VolGroup00_00008.vg
> -rw------- 1 root root 1327 Oct 1 16:31 VolGroup00_00009.vg
> -rw------- 1 root root 1328 Oct 1 16:31 VolGroup00_00010.vg
> -rw------- 1 root root 1325 Oct 1 16:33 VolGroup00_00011.vg
> -rw------- 1 root root 1325 Oct 1 16:33 VolGroup00_00012.vg
> -rw------- 1 root root 718 Oct 1 16:33 VolGroup01_00000.vg
> [root(a)ohmster archive]#
>
> But VolGroup01 did not exist as 01 until after the catastrophe; before
> that, they were both named VolGroup00.
>
> Gotta run! Thanks.

From: Ohmster on
noi <noi(a)siam.com> wrote in news:2awVg.9477$vJ2.2722
@newssvr12.news.prodigy.com:

> Thinking.
>
> Would you have a backup that included /etc on FC3 prior to installing the
> FC5? This guy used a 6 month old backup to restore his data.
>
> http://codeworks.gnomedia.com/archives/2005/general/lvm_recovery/
>
> Since hdb2 is now v01 have you tried to mount it as a LVM?
>
> $ mount /dev/VolGroup01/LogVol00 /mnt/VG01/LV_FC3

There is no VolGroup01 in /dev. I went and tried to find a backup using
the dude's recovery method here:
http://www.linuxjournal.com/article/8874

And the VolGroup01 failed to restore. I went looking through the file
that dd created and found the original text from when the volume was 00,
back when things were good. Here is the code from it:

[root(a)ohmster recoverybox]# cat VolGroup00
VolGroup00 {
id = "3agBFX-D3N0-Bp3c-I7se-JVLY-ZIIa-fqMcIF"
seqno = 1
status = ["RESIZEABLE", "READ", "WRITE"]
extent_size = 65536
max_lv = 0
max_pv = 0
physical_volumes {
pv0 {
id = "v40FVM-IWMU-T26b-I9eD-UprZ-WNqu-a9MTGq"
device = "/dev/hda2"
status = ["ALLOCATABLE"]
pe_start = 384
pe_count = 6074
}
}
# Generated by LVM2: Sat May 7 22:23:59 2005
[root(a)ohmster recoverybox]#

The only problem is that the recovered metadata shows the drive as
/dev/hda2 and the group as VolGroup00. Currently, that is occupied by my
FC5 installation.
I think that the only shot in hell I have of restoring this drive is to
shut down the box, reconnect the drive as hda, then booting with a rescue
or live CD that has LVM support, and then trying to restore the drive and
then renaming it to VolGroup01. That might be the only chance I have to
restore this drive. The sad part is that I cannot find a live CD that
supports LVM nor any instructions on how to do this. This sucks man.
Everything I had was on that disk and now it is gone. :(
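The metadata-fishing step described above (grepping the dd image for old volume group text) generalizes. Here is a sketch against a synthetic image, with /tmp/fake-pv.img standing in for the real dd output:

```shell
# Build a small synthetic "image": filler, a metadata-like block,
# more filler. On a real PV the metadata text lives near the start
# of the partition, so dd'ing the first megabyte is usually enough.
img=/tmp/fake-pv.img
seq 1 100 | sed 's/^/filler line /' > "$img"
printf 'VolGroup00 {\nid = "EXAMPLE"\n}\n' >> "$img"
seq 1 100 | sed 's/^/filler line /' >> "$img"

# grep -a treats binary data as text; -b prints the byte offset of
# the matching line, which is where to carve with dd.
grep -a -b 'VolGroup00 {' "$img"
# prints: 1492:VolGroup00 {

# Carving the region around the hit (on a real image you would
# widen count= until the whole VG block is visible):
dd if="$img" bs=1 skip=1492 count=40 2>/dev/null
```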

--
~Ohmster
theohmster at comcast dot net
Put "messageforohmster" in message body
to pass my spam filter.
From: noi on
On Sat, 07 Oct 2006 21:33:05 +0000, Ohmster wrote this:

> noi <noi(a)siam.com> wrote in news:2awVg.9477$vJ2.2722
> @newssvr12.news.prodigy.com:
>
>> Thinking.
>>
>> Would you have a backup that included /etc on FC3 prior to installing the
>> FC5? This guy used a 6 month old backup to restore his data.
>>
>> http://codeworks.gnomedia.com/archives/2005/general/lvm_recovery/
>>
>> Since hdb2 is now v01 have you tried to mount it as a LVM?
>>
>> $ mount /dev/VolGroup01/LogVol00 /mnt/VG01/LV_FC3
>
> There is no VolGroup01 in /dev. I went and tried to find a backup using the
> dude's recovery method here:
> http://www.linuxjournal.com/article/8874
>
> And the VolGroup01 failed to restore. I went looking through the file that
> dd created and found the original text from when the volume was 00, back
> when things were good. Here is the code from it:
>
> [root(a)ohmster recoverybox]# cat VolGroup00
> VolGroup00 {
> id = "3agBFX-D3N0-Bp3c-I7se-JVLY-ZIIa-fqMcIF"
> seqno = 1
> status = ["RESIZEABLE", "READ", "WRITE"]
> extent_size = 65536
> max_lv = 0
> max_pv = 0
> physical_volumes {
> pv0 {
> id = "v40FVM-IWMU-T26b-I9eD-UprZ-WNqu-a9MTGq"
> device = "/dev/hda2"
> status = ["ALLOCATABLE"]
> pe_start = 384
> pe_count = 6074
> }
> }
> # Generated by LVM2: Sat May 7 22:23:59 2005
> [root(a)ohmster recoverybox]#
>
> The only problem is that the recovered metadata shows the drive as
> /dev/hda2 and the group as VolGroup00. Currently, that is occupied by my
> FC5 installation. I
> think that the only shot in hell I have of restoring this drive is to shut
> down the box, reconnect the drive as hda, then booting with a rescue or
> live CD that has LVM support, and then trying to restore the drive and
> then renaming it to VolGroup01. That might be the only chance I have to
> restore this drive. The sad part is that I cannot find a live CD that
> supports LVM nor any instructions on how to do this. This sucks man.
> Everything I had was on that disk and now it is gone. :(

Well, I'm also confused as to which of your volume groups is on which
hard drive. I'm thinking, but maybe you could retrace what you've done
from the beginning to see if you've overlooked a step. And avoid Webmin
for now.

IIUYC, you're ready for more experiments? Just be careful not to write
to your FC3 drive or change file permissions or ownership on FC3.

Here's the thing: IMO you could set hda to "not installed" in your BIOS
and change the boot sequence to CDROM, IDE-1, or however your BIOS
determines the hard disk boot drive.

I'm sure you can download and install packages on LiveCDs, but those
packages go away once you reboot the LiveCD. So, if you want to install
LVM on the Knoppix LiveCD, try the instructions at

http://www.knoppix.net/wiki/LVM2

Maybe with just the FC3 drive, the Knoppix LiveCD, and LVM installed on
Knoppix, you could first try to mount the FC3 volume group and back up
your data. If that fails, try the recovery procedures from the previous
links.

First, I'd try to download and install LVM for the Knoppix LiveCD.
Although it did mount the FC5 LVM volumes, didn't it? Maybe the LVM2
tools are already on the CD, and you can try the recovery using Knoppix
and just the FC3 drive.
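For reference, the live-CD recovery noi outlines usually reduces to a short command sequence. It is sketched below as a checklist written to a file rather than executed, since it needs the real disks attached; the VG, LV, and mount-point names are assumptions:

```shell
# Write the usual live-CD LVM activation steps to a checklist file.
# Each line is a real lvm2/util-linux command; nothing here touches
# a disk. Mounting read-only protects the data while experimenting.
cat > /tmp/lvm-livecd-checklist.txt <<'EOF'
modprobe dm-mod
vgscan
vgchange -a y VolGroup01
mkdir -p /mnt/recovery
mount -o ro /dev/VolGroup01/LogVol00 /mnt/recovery
EOF
cat /tmp/lvm-livecd-checklist.txt
```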