From: John Rumm on
On 24/06/2010 12:42, dennis(a)home wrote:

>>> PS what do you think are the obvious reasons they died out?
>>
>> "Died out" is perhaps over-egging it - it never really got started.
>> There were several problems; price was a significant factor in the
>> first place - it required bespoke drives
>
> Well that's not exactly true, most of the drives at the time had sync
> connectors and you just didn't use them if you didn't need them.
>
>> and controllers with non standard interfaces.
>
> The interfaces were standards at the time; there was nothing special
> about the drives compared to other drives.

Just how far are you going back here? RAID and the numbered levels were
not even defined as a standard concept until the late 80's and by that
time ST506 interfaced drives were coming to the end of their era. Early
IDE systems were already appearing, and "standard" RAID was pretty rare
on anything other than IDE or SCSI drives.

(I have no doubt there were some proprietary systems about that tried
RAID 2-like tricks to eke out some extra performance from the drives of
the day; however, I would not call them "proper RAID", to borrow your
phrase.)

>> It also used a comparatively large number of drives compared to other
>> RAID setups as well, without providing the performance or redundancy
>> advantages either.
>
> They were certainly redundant.

RAID 2 does not include redundancy in the accepted sense, as it holds
only one copy of the data. A system could tolerate a single drive
failure, not through redundancy, but via the error-correcting
capability of Hamming coding (which allows the correction of a
single-bit error in a byte when encoded as a 12-bit Hamming word).
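To make that concrete, here is a minimal Python sketch of (12,8) Hamming encoding and single-bit correction. The check-bit layout below (check bits at positions 1, 2, 4 and 8, 1-indexed) is just one common arrangement assumed for illustration, not a description of any actual RAID 2 controller:

```python
def hamming_encode(byte):
    """Encode an 8-bit value as a 12-bit single-error-correcting
    Hamming word. Check bits sit at positions 1, 2, 4 and 8
    (1-indexed); data bits fill the remaining positions."""
    assert 0 <= byte < 256
    word = [0] * 13                      # index 0 unused
    data_positions = (3, 5, 6, 7, 9, 10, 11, 12)
    for i, pos in enumerate(data_positions):
        word[pos] = (byte >> i) & 1
    for p in (1, 2, 4, 8):
        # Each check bit covers every position whose index
        # has that check bit set in its binary form.
        parity = 0
        for pos in range(3, 13):
            if pos & p:
                parity ^= word[pos]
        word[p] = parity
    return word[1:]                      # list of 12 bits

def hamming_correct(word12):
    """Recompute the check bits; a non-zero syndrome is the 1-indexed
    position of the (single) flipped bit. Returns (byte, position),
    with position 0 meaning no error was detected."""
    word = [0] + list(word12)
    syndrome = 0
    for p in (1, 2, 4, 8):
        parity = 0
        for pos in range(1, 13):
            if pos & p:
                parity ^= word[pos]
        if parity:
            syndrome |= p
    if syndrome:
        word[syndrome] ^= 1              # repair the flipped bit
    byte = 0
    for i, pos in enumerate((3, 5, 6, 7, 9, 10, 11, 12)):
        byte |= word[pos] << i
    return byte, syndrome
```

Since RAID 2 stripes the word one bit per drive, a single failed drive corrupts exactly one bit of each word, which is precisely the case this scheme can repair.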

> Performance relative to single drives was very quick.
>
> Most drives rotated quite slowly, and it took a lot longer to read the
> data from a single drive than from an array; the latency was the same.
>
>>
>> As a technology it was rendered obsolete almost immediately when the
>> drive manufacturers included at first equal (and shortly later,
>> superior) FEC within their drive firmware.
>
> There was little or no firmware on drives at the time.

You are harking back to ST506 interface drives again, generally before
RAID made much impact.

> The integration of controllers did kill them off.
> People like me were responsible as we decided to use SCSI and make the
> controller designers do something more useful.
> At the time every computer would have its own controller design; that
> was a complete waste of time when there weren't enough engineers around
> to design more useful bits like bit-slice CPUs and bubble memory cards!

Early 80s perhaps - even as early as '86-ish plenty of drive controllers
were implemented using off-the-shelf LSI chip sets (still quite often a
fair-size ISA card, mind you). By the late 80's an MFM drive controller
card cost peanuts (relatively speaking).

> I actually went to a disk drive conference (I don't really know why) in
> the early eighties; there were some real die-hard engineers there who
> would come along and, sensing that I didn't design controllers, start
> spewing undecipherable jargon. They looked a bit shocked when I said we
> were going to use SCSI, as you couldn't even buy a SCSI disk at the time.
> A year later things were different.

IIRC I bought my first HDD about '87. A "huge" 42MB Seagate. SCSI would
have been a better match for the system, but was out of my price range
at the time...

>> That meant that a pair of mirrored drives on standard controllers
>> offered better reliability at a fraction of the cost. So game over for
>> RAID 2.
>
> I never saw a mirrored pair of drives at the time; it just wasn't going
> to happen, as you needed the parity bits to do correction and redundancy.

I think we are discussing matters at a slight time skew here...

--
Cheers,

John.

/=================================================================\
| Internode Ltd - http://www.internode.co.uk |
|-----------------------------------------------------------------|
| John Rumm - john(at)internode(dot)co(dot)uk |
\=================================================================/
From: Paul Bird on
Jon Green wrote:
<snip>
> Certainly not a case of a fatal failure within minutes or hours,
> although Roland's right that it can sometimes go catastrophic a lot more
> quickly.
>
> Jon
> (* Sorry to bang on about it -- it's the best example I have of a fully
> instrumented drive failure; usually we just swap regardless, ASAP.)

You would *not* have wanted to be in charge of the radio station on a
cruise ship where, one afternoon, the ship's comms started playing up
with no new drives onboard. I spent the evening starting to get a copy
of the data off onto another machine and begged the C/eng to get me a
new drive pronto (we were alongside), which he did, thank heavens. I got
some sleep, looked at it in the morning only to find it was worse, and
got enough off it to restart the system when the new drive came onboard.
My experience is that the quality of the equipment onboard is in inverse
proportion to the amount of money the pax are paying.

That's why I said when a drive starts to go, it can go in hours. Which
is not funny when it's not part of a RAID setup.

PB
From: dennis on


"John Rumm" <see.my.signature(a)nowhere.null> wrote in message
news:5b2dnXjjK-cQ6r7RnZ2dnUVZ8vGdnZ2d(a)brightview.co.uk...
> On 24/06/2010 12:42, dennis(a)home wrote:
>
>>>> PS what do you think are the obvious reasons they died out?
>>>
>>> "Died out" is perhaps over-egging it - it never really got started.
>>> There were several problems; price was a significant factor in the
>>> first place - it required bespoke drives
>>
>> Well that's not exactly true, most of the drives at the time had sync
>> connectors and you just didn't use them if you didn't need them.
>>
>>> and controllers with non standard interfaces.
>>
>> The interfaces were standards at the time; there was nothing special
>> about the drives compared to other drives.
>
> Just how far are you going back here?

Far enough to predate anything most people have seen or even heard of.
I have been in computer engineering a lot longer than most people.


> RAID and the numbered levels were not even defined as a standard concept
> until the late 80's and by that time ST506 interfaced drives were coming
> to the end of their era.

I predate ST506.

> Early IDE systems were already appearing, and "standard" RAID was pretty
> rare on anything other than IDE or SCSI drives.
>
> (I have no doubt there were some proprietary systems about that tried RAID
> 2-like tricks to eke out some extra performance from the drives of the
> day; however, I would not call them "proper RAID", to borrow your phrase.)

Oh I would; "proper RAID" distinguishes them from RAID.

>
>>> It also used a comparatively large number of drives compared to other
>>> RAID setups as well, without providing the performance or redundancy
>>> advantages either.
>>
>> They were certainly redundant.
>
> RAID 2 does not include redundancy in the accepted sense, as it holds only
> one copy of the data. A system could tolerate a single drive failure, not
> through redundancy, but via the error-correcting capability of Hamming
> coding (which allows the correction of a single-bit error in a byte
> when encoded as a 12-bit Hamming word).
>
>> Performance relative to single drives was very quick.
>>
>> Most drives rotated quite slowly, and it took a lot longer to read the
>> data from a single drive than from an array; the latency was the same.
>>
>>>
>>> As a technology it was rendered obsolete almost immediately when the
>>> drive manufacturers included at first equal (and shortly later,
>>> superior) FEC within their drive firmware.
>>
>> There was little or no firmware on drives at the time.
>
> You are harking back to ST506 interface drives again, generally before
> RAID made much impact.
>
>> The integration of controllers did kill them off.
>> People like me were responsible as we decided to use SCSI and make the
>> controller designers do something more useful.
>> At the time every computer would have its own controller design; that
>> was a complete waste of time when there weren't enough engineers around
>> to design more useful bits like bit-slice CPUs and bubble memory cards!
>
> Early 80s perhaps - even as early as '86-ish plenty of drive controllers
> were implemented using off-the-shelf LSI chip sets (still quite often a
> fair-size ISA card, mind you). By the late 80's an MFM drive controller
> card cost peanuts (relatively speaking).
>
>> I actually went to a disk drive conference (I don't really know why) in
>> the early eighties; there were some real die-hard engineers there who
>> would come along and, sensing that I didn't design controllers, start
>> spewing undecipherable jargon. They looked a bit shocked when I said we
>> were going to use SCSI, as you couldn't even buy a SCSI disk at the time.
>> A year later things were different.
>
> IIRC I bought my first HDD about '87. A "huge" 42MB Seagate. SCSI would
> have been a better match for the system, but was out of my price range at
> the time...

When I started they were in the 5 MB range and 14" diameter; you built
controllers with RLL encoding and stuff like that.
Typically they would occupy a couple of MB1-sized cards, or a bit more.

>
>>> That meant that a pair of mirrored drives on standard controllers
>>> offered better reliability at a fraction of the cost. So game over for
>>> RAID 2.
>>
>> I never saw a mirrored pair of drives at the time; it just wasn't going
>> to happen, as you needed the parity bits to do correction and redundancy.
>
> I think we are discussing matters at a slight time skew here...


I know that and I have indicated so in earlier posts.

We could continue but archaeology isn't my best subject.


From: dennis on


"Huge" <Huge(a)nowhere.much.invalid> wrote in message
news:88hbnuF7rjU2(a)mid.individual.net...
> On 2010-06-24, John Rumm <see.my.signature(a)nowhere.null> wrote:
>> On 24/06/2010 12:42, dennis(a)home wrote:
>
> John, you're arguing with 'dennis the erroneous'. Why?

Because he likes to learn things; you can't learn things.

From: Jon Green on
On 24/06/2010 16:27, Paul Bird wrote:
> Jon Green wrote:
>> Certainly not a case of a fatal failure within minutes or hours,
>> although Roland's right that it can sometimes go catastrophic a lot
>> more quickly.
>
> You would *not* have wanted to be in charge of the radio station on a
> cruise ship where, one afternoon, the ship's comms started playing up
> with no new drives onboard. I spent the evening starting to get a copy
> of the data off onto another machine, begged the C/eng to get me a new
> drive pronto (we were alongside), which he did, thank heavens, got some
> sleep, looked at it in the morning only to find it was worse, and got
> enough off it to restart the system when the new drive came onboard.

You're right -- I wouldn't! I've done enough heroics in my time to want
to prevent rather than cure where possible.

> My
> experience is the quality of the equipment onboard is in inverse
> proportion to the amount of money the pax are paying.

I'm not greatly surprised, TBH. The bigger the money, the more
boneheaded number-crunchers you'll find getting in the way of common
sense, each eager to demonstrate that they've justified their salary by
achieving cuts in "unnecessary overheads". Like, just for instance,
eliminating enough shelf stock that a ship days from port can no longer
remain self-sufficient. I've been in analogous situations, and I know
your pain!

> That's why I said when a drive starts to go, it can go in hours. Which
> is not funny when it's not part of a RAID setup.

Oh, quite. Any mission-critical kit should have fallbacks planned and
implemented as part of its specification. Even something as banal as
the shipboard radio station is something the pax will notice missing.
If you have to cite "technical problems" as the reason, your bejewelled
and fur-wrapped customers will be wondering what else is badly
maintained. How about the bridge equipment? The escape davits?

Tar, ship, caulking for the use of, ha'p'orth thereof.

Jon
--
SPAM BLOCK IN USE! To reply in email, replace 'deadspam'
with 'green-lines'.
Blog: http://bit.ly/45cLHw Pix: http://bit.ly/d8V2NJ
Website: http://www.green-lines.com/