From: dennis on


"Bob Eager" <rde42(a)spamcop.net> wrote in message
news:88glroFtvfU23(a)mid.individual.net...


> We repeat: it wasn't RAID. It wasn't Inexpensive.

Inexpensive is not a quantitative measure.
What is inexpensive to one person is expensive to another.
You can't actually define inexpensive, so it's not going to win the argument.

The main difference was in where the controller lived: controllers used to be
boards full of complex electronics, paired with cheaper disks that had no
controller of their own. Disks with integrated controllers (e.g. SCSI) were
the expensive option and didn't work with bitwise arrays. Therefore the array
which shared the controller and used dumb drives was the inexpensive option
(note that no drives were inexpensive then; some were a few hundred dollars
cheaper, but still cost as much as a car does these days).

It was only when disk controllers became a chip or two and were integrated
into the drive that drives became cheaper, and that had as much to do with
shrinking physical size as with integrating the electronics.

That's the trouble with kids, they just don't know what happened in the
past. ;-)

From: John Rumm on
On 24/06/2010 09:08, dennis(a)home wrote:
>
>
> "John Rumm" <see.my.signature(a)nowhere.null> wrote in message
> news:3dadnUddYdB5Bb_RnZ2dnUVZ8imdnZ2d(a)brightview.co.uk...
>
>> Some early implementations of RAID level 2 tried this. Bit level
>> splitting of data over some drives and applying FEC to generate check
>> and correct bits on parity drives.
>>
>> It's not used these days for obvious reasons - and it kind of
>> went against the whole RAID philosophy in the first place, so calling
>> it "proper RAID" is a Dennis'ism really.
>
> Show me where I called it a proper RAID?

Erm, how about a couple of posts back where you said "Ah well that was
probably in the days of proper RAIDs. The ones where it was done bitwise
across the disks and all the spindles and heads were synchronised.
They were expensive."

> You are reading stuff that hasn't been written.

See above...

> PS what do you think are the obvious reasons they died out?

"Died out" is perhaps over egging it - it never really got started.
There were several problems; Price was a significant factor in the first
place - it required bespoke drives and controllers with non standard
interfaces. It also used a comparatively large number of drives compared
to other RAID setups as well, without providing the performance or
redundancy advantages either.

As a technology it was rendered obsolete almost immediately when the
drive manufacturers included first equal (and shortly afterwards
superior) FEC within their drive firmware. That meant that a pair of
mirrored drives on standard controllers offered better reliability at a
fraction of the cost. So game over for RAID 2.
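
For anyone who never met a RAID 2 box, here's a minimal sketch of the idea
(purely illustrative Python, not anyone's real controller): one bit per data
drive, with Hamming check bits held on separate check drives, so a single
bad bit can be located and rebuilt.

# Illustrative sketch of RAID 2-style bit striping with Hamming(7,4):
# four "data drives" each hold one bit, three "check drives" hold parity.
# The drive layout and bit ordering here are assumptions for the example.

def hamming74_encode(d):
    """d is the 4 data bits (one per data drive); returns the 3 check bits."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # check drive 1 covers data bits 1, 2, 4
    p2 = d1 ^ d3 ^ d4          # check drive 2 covers data bits 1, 3, 4
    p3 = d2 ^ d3 ^ d4          # check drive 3 covers data bits 2, 3, 4
    return [p1, p2, p3]

def hamming74_correct(d, p):
    """Recompute the checks; the syndrome locates a single bad bit to flip."""
    s1 = p[0] ^ d[0] ^ d[1] ^ d[3]
    s2 = p[1] ^ d[0] ^ d[2] ^ d[3]
    s3 = p[2] ^ d[1] ^ d[2] ^ d[3]
    syndrome = s1 + (s2 << 1) + (s3 << 2)    # 0 means nothing to correct
    data_pos = {3: 0, 5: 1, 6: 2, 7: 3}      # syndrome -> index of bad data bit
    if syndrome in data_pos:
        d = d[:]
        d[data_pos[syndrome]] ^= 1           # rebuild the failed drive's bit
    return d

data = [1, 0, 1, 1]                  # one bit from each of four data drives
checks = hamming74_encode(data)      # written to the three check drives
corrupted = data[:]
corrupted[2] ^= 1                    # simulate one drive returning a bad bit
print(hamming74_correct(corrupted, checks))   # -> [1, 0, 1, 1]

The point of the sketch is only to show why it needed so many spindles:
three extra drives just to protect four bits' worth of striping.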


--
Cheers,

John.

/=================================================================\
| Internode Ltd - http://www.internode.co.uk |
|-----------------------------------------------------------------|
| John Rumm - john(at)internode(dot)co(dot)uk |
\=================================================================/
From: dennis on


"John Rumm" <see.my.signature(a)nowhere.null> wrote in message
news:vYGdnQZDbfZko77RnZ2dnUVZ8r2dnZ2d(a)brightview.co.uk...
> On 24/06/2010 09:08, dennis(a)home wrote:
>>
>>
>> "John Rumm" <see.my.signature(a)nowhere.null> wrote in message
>> news:3dadnUddYdB5Bb_RnZ2dnUVZ8imdnZ2d(a)brightview.co.uk...
>>
>>> Some early implementations of RAID level 2 tried this. Bit level
>>> splitting of data over some drives and applying FEC to generate check
>>> and correct bits on parity drives.
>>>
>>> It's not used these days for obvious reasons - and it kind of
>>> went against the whole RAID philosophy in the first place, so calling
>>> it "proper RAID" is a Dennis'ism really.
>>
>> Show me where I called it a proper RAID?
>
> Erm, how about a couple of posts back where you said "Ah well that was
> probably in the days of proper RAIDs. The ones where it was done bitwise
> across the disks and all the spindles and heads were synchronised.
> They were expensive."
>
>> You are reading stuff that hasn't been written.
>
> See above...
>
>> PS what do you think are the obvious reasons they died out?
>
> "Died out" is perhaps over egging it - it never really got started. There
> were several problems; Price was a significant factor in the first place -
> it required bespoke drives

Well, that's not exactly true: most of the drives at the time had sync
connectors, and you just didn't use them if you didn't need them.

> and controllers with non-standard interfaces.

The interfaces were standards at the time; there was nothing special about
the drives compared to other drives.

> It also used a comparatively large number of drives compared to other RAID
> setups, without providing the performance or redundancy advantages either.

They were certainly redundant.
Performance relative to a single drive was very good.

Most drives rotated quite slowly, so it took a lot longer to read the data
from a single drive than from an array; the latency was the same.
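
Rough numbers show the sort of difference I mean (the figures below are
assumptions typical of drives of that era, not measurements):

# Bit-striping divides the transfer part of a read by the number of data
# drives, but the rotational latency stays the same for the whole array.
# All figures are illustrative assumptions.

rpm = 3600
avg_latency_ms = 0.5 * 60000 / rpm         # half a revolution, ~8.3 ms
transfer_rate_mb_s = 1.0                   # assumed single-drive media rate
block_kb = 64
single_transfer_ms = block_kb / 1024 / transfer_rate_mb_s * 1000

for drives in (1, 4, 8):
    total_ms = avg_latency_ms + single_transfer_ms / drives
    print(f"{drives} data drive(s): ~{total_ms:.0f} ms for a {block_kb} KB read")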

>
> As a technology it was rendered obsolete almost immediately when the drive
> manufacturers included first equal (and shortly afterwards superior) FEC
> within their drive firmware.

There was little or no firmware on drives at the time.
The integration of controllers did kill them off.
People like me were responsible, as we decided to use SCSI and make the
controller designers do something more useful.
At the time every computer would have its own controller design, which was a
complete waste of time when there weren't enough engineers around to design
more useful bits like bit-slice CPUs and bubble memory cards!

I actually went to a disk drive conference (I don't really know why) in the
early eighties. There were some real die-hard engineers there who, sensing
that I didn't design controllers, would come along and start spewing
undecipherable jargon. They looked a bit shocked when I said we were going
to use SCSI, as you couldn't even buy a SCSI disk at the time. A year later
things were different.

> That meant that a pair of mirrored drives on standard controllers offered
> better reliability at a fraction of the cost. So game over for RAID 2.

I never saw a mirrored pair of drives at the time; it just wasn't going to
happen, as you needed the parity bits to do correction and redundancy.
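
The contrast being argued over looks something like this (an illustrative
sketch only; the layout is an assumption, not any particular product):

# A mirrored pair only helps once a drive *reports* the bad read, at which
# point the controller uses the other copy; a bitwise array with parity can
# rebuild a drive's bit from the survivors even when it is missing entirely.

def read_mirrored(copy_a, copy_b, a_read_ok):
    """Use copy A unless its drive flagged the read as bad."""
    return copy_a if a_read_ok else copy_b

def rebuild_from_parity(surviving_bits, parity_bit):
    """XOR of the surviving drives' bits and the parity gives the lost bit."""
    bit = parity_bit
    for b in surviving_bits:
        bit ^= b
    return bit

# The array held bits 1, 0, 1, 1 with parity 1 (XOR of all four);
# the fourth drive has gone away and its bit is recovered from the rest.
print(rebuild_from_parity([1, 0, 1], 1))    # -> 1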

From: Jules Richardson on
On Wed, 23 Jun 2010 22:58:01 +0100, Paul Bird wrote:
>>> A low level reformat can sometimes help, but IME it usually only
>>> delays the inevitable.
>>
>> It won't actually write a "low level" format pattern to the drive, but
>> will serve to mark some sectors as "bad". However, it won't stop the
>> rot spreading.
>
> . . . and the rot can spread remarkably quickly. Hours in my
> experience.

Depends on the age of the drive, I've found. In the context of the OP,
correct - but it's not true of all drives, with older ones (ancient in
computing terms) often surviving for years with a few duff blocks.

(and remember the days when you had to reformat the drive if you changed
its orientation, as otherwise it'd start spewing out errors all over the
place? :-)

cheers

Jules
From: Jon Green on
On 24/06/2010 15:20, Jules Richardson wrote:
> On Wed, 23 Jun 2010 22:58:01 +0100, Paul Bird wrote:
>>>> A low level reformat can sometimes help, but IME it usually only
>>>> delays the inevitable.
>>>
>>> It won't actually write a "low level" format pattern to the drive, but
>>> will serve to mark some sectors as "bad". However, it won't stop the
>>> rot spreading.
>>
>> . . . and the rot can spread remarkably quickly. Hours in my
>> experience.
>
> Depends on the age of the drive, I've found. In the context of the OP,
> correct - but it's not true of all drives, with older (ancient in
> computing terms) often surviving for years with a few duff blocks.

The one from the RAID* carried on for a month before I replaced it.
There were about three incidents of clumps of bad-block reports in that
time. As it was in a RAID anyway, and so had redundancy, I thought it
would be interesting to watch the rate of degradation; fuel for future
IT policies. The last clump of bad blocks was bigger than its
predecessors put together, which seemed like a good time to swap it. :)

Certainly not a case of a fatal failure within minutes or hours,
although Roland's right that it can sometimes go catastrophic a lot more
quickly.
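
For anyone who fancies doing the same sort of watching, a minimal sketch of
the instrumentation (assuming a Unix-like system, raw read access, and a
purely hypothetical device path) - it just walks the device and logs which
blocks won't read, so repeated runs show how fast the bad patches grow:

# Walks a raw device and records unreadable chunks; compare runs over time.
# DEVICE is a hypothetical example path - point it at the suspect drive.
import os, time

DEVICE = "/dev/sdb"
BLOCK = 64 * 1024

def scan(device=DEVICE, block=BLOCK):
    bad = []
    fd = os.open(device, os.O_RDONLY)
    try:
        offset = 0
        while True:
            try:
                data = os.pread(fd, block, offset)
            except OSError:          # unreadable region: note it and step over
                bad.append(offset)
                offset += block
                continue
            if not data:             # end of the device
                break
            offset += block
    finally:
        os.close(fd)
    print(f"{time.ctime()}: {len(bad)} unreadable {block // 1024} KB chunks")
    return bad

if __name__ == "__main__":
    scan()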

Jon
(* Sorry to bang on about it -- it's the best example I have of a fully
instrumented drive failure; usually we just swap regardless, ASAP.)
--
SPAM BLOCK IN USE! To reply in email, replace 'deadspam'
with 'green-lines'.
Blog: http://bit.ly/45cLHw Pix: http://bit.ly/d8V2NJ
Website: http://www.green-lines.com/