From: Marty on
Hi all,
I upgraded suse 11.2 to 11.3. Everything went ok. File system ext4.
There are 4 partitions, one / , one /home and 2 other partitions for
user data - /dev/sda5 - /dev/sda6 and /dev/sda7.

Everything was working ok on 11.2 and for a few days on 11.3.

For 3 days now, after every boot or reboot, the system stops during boot and
tells me that all 3 user partitions have unexpected inconsistencies, and
that I have to run fsck manually. The root partition is fine.

I provide my root password for login and run 'fsck /dev/sda5' (same for
6 and 7), no error message then (0.1%, 0.2% and 0.9% non-contiguous), I
reboot with Control-D and the system boots fine.

On the next boot or reboot the same thing.

I doubt that the hard drive is damaged, it's fairly new.

I haven't found a solution anywhere, any advice is appreciated.

Many thanks, Marty
From: Pete Puma on
Marty wrote:

> Hi all,
> I upgraded suse 11.2 to 11.3. Everything went ok. File system ext4.
> There are 4 partitions, one / , one /home and 2 other partitions for
> user data - /dev/sda5 - /dev/sda6 and /dev/sda7.
>
> Everything was working ok on 11.2 and for a few days on 11.3.
>
> For 3 days now, after every boot or reboot, the system stops during
> boot and tells me that all 3 user partitions have unexpected
> inconsistencies, and that I have to run fsck manually. The root
> partition is fine.
>
> I provide my root password for login and run 'fsck /dev/sda5' (same
> for 6 and 7), no error message then (0.1%, 0.2% and 0.9%
> non-contiguous), I reboot with Control-D and the system boots fine.
>
> On the next boot or reboot the same thing.
>
> I doubt that the hard drive is damaged, it's fairly new.
>
> I haven't found a solution anywhere, any advice is appreciated.
>
> Many thanks, Marty

Does this fsck run while the disk is mounted? At the point of login, I
think the disk(s) are all mounted.
I use "shutdown -rF now" for a fsck on reboot.
Hope it helps.
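As an aside, the "shutdown -rF now" trick has a sibling on many sysvinit-era
systems (which, as far as I know, includes openSUSE 11.x): an empty flag file
named "forcefsck" in the root directory asks the boot scripts to run a full
check at the next boot. A minimal sketch; it uses a temporary directory in
place of "/" so it can be run without root and without touching the real
root filesystem:

```shell
# Two sysvinit-era ways to request a full fsck at the next boot.
# FLAG_DIR would be "/" on a real system; a temp dir stands in here so
# the sketch is safe to run as an ordinary user.
FLAG_DIR="${FLAG_DIR:-$(mktemp -d)}"

# Way 1: drop an empty "forcefsck" flag file. The boot scripts see it,
# check the fstab-listed filesystems, then remove the flag.
touch "$FLAG_DIR/forcefsck"
ls "$FLAG_DIR"    # shows the forcefsck flag

# Way 2 (needs root, reboots immediately): shutdown's -F flag forces
# a filesystem check on the way back up.
#   shutdown -rF now
```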

And if you get that smoothed out: a good SMART utility could offer some
insight into your drives once you get going. I've used "gsmartcontrol"
all throughout 11.2 and now 11.3, and it continues to work as smoothly
as its Windows counterpart.

FYI: I just had 3 new drives, fresh from the local computer
supermarket, refuse to mount. Western Digital shot 3 replacements back
to me in 3 days, all of which worked. Can't believe I went 3 bad out of
3 new ones, but it happens.

From: Marty on
Thanks for that. Yes, the drive is not mounted when fsck is run
manually; fsck warns that it must be unmounted.
I have run it several times, also from 'rescue system'.
The way I understand it is that fsck does want to run at boot but for some
reason doesn't. After I run it manually the machine boots and runs fine. I
checked the partitions with fsck again when the machine was running and fsck
reported no errors at all. I wonder if suse does not shut down properly
and cleanly, which would explain the ongoing checks after boot.
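For what it's worth, whether fsck runs at boot at all is governed by the
sixth field (fs_passno) of each /etc/fstab line: 0 means never check, 1 is
reserved for the root filesystem, 2 is for everything else. If the upgrade
rewrote fstab, that field is worth a look. A sketch against a sample fstab
(the sda5-sda7 device names follow the thread; the mount points, options,
and the / and /home devices are made up for illustration):

```shell
# Sample fstab for illustration only; on a real system read /etc/fstab.
cat > /tmp/fstab.sample <<'EOF'
/dev/sda2  /       ext4  defaults  1 1
/dev/sda3  /home   ext4  defaults  1 2
/dev/sda5  /data1  ext4  defaults  1 2
/dev/sda6  /data2  ext4  defaults  1 2
/dev/sda7  /data3  ext4  defaults  1 0
EOF

# Print mount point and fs_passno; a 0 in the second column means the
# boot-time fsck silently skips that partition.
awk '$1 !~ /^#/ {print $2, $6}' /tmp/fstab.sample
```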

BTW, my drive is a Western Digital, it's been working fine for almost a
year. Never had a disk failing on Linux in 11 years. My guess is still that
something went wrong during the upgrade from 11.2 to 11.3 . Will check out
"gsmartcontrol".

Meanwhile the problem is still not solved and I put the machine into
hibernate for the time being instead of shutting down.

Cheers Marty



From: mjt on
On Sun, 25 Jul 2010 22:42:41 +1200
Marty <not(a)this.address> wrote:

> BTW, my drive is a Western Digital, it's been working fine for almost
> a year. Never had a disk failing on Linux in 11 years. My guess is
> still that something went wrong during the upgrade from 11.2 to
> 11.3 .

Possibly. I always tell folks to do a clean install when going up a
version (or several) and not to do an "upgrade".

I still have 11.2 lingering about (and earlier versions).
I installed 11.3 to its own dedicated partition. I do this
because, what if 11.3 gets hosed or there is an issue with
some software that I *have* to use? If there is an issue,
I can always boot 11.2 and keep on trucking.

--
T: One big monster, he called TROLL.
He don't rock, and he don't roll;
Drink no wine, and smoke no stogies.
He just Love To Eat Them Roguies.
-- The Roguelet's ABC
<<< Remove YOURSHOES to email me >>>

From: mjt on
On Mon, 26 Jul 2010 01:07:39 +0200
houghi <houghi(a)houghi.org.invalid> wrote:

> mjt wrote:
> > Possibly. I always tell folks to do a clean install
> > when going up a version(s) and to not do an "upgrade".
>
> I always tell people to do an upgrade and if that fails, you can
> always do an install.

So a person has wasted their time doing the upgrade that failed. Better
to do a clean install the first time and be done with it ... it takes
about the same amount of time.

--
Xerox never comes up with anything original.
<<< Remove YOURSHOES to email me >>>