Filmscanners mailing list archive (filmscanners@halftone.co.uk)
RE: filmscanners: Best solution for HD and images
> You should know that not only do striped disks reduce reliability
> and hence
> increase risk but they also increase severity.
As I've said, that's misinformation. Do you have any real MTBF testing data
that backs up your claim, or is it just speculation?
> i.e. any one drive of a multiple striped drive set failing WILL
> lose ALL of
> your data.
This is just a silly statement. You lose ALL your data with a single disk
just the same.
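For what it's worth, the distinction the two posts are circling can be put in a few lines of arithmetic: the *severity* of a failure (all data gone) is identical for a single disk and a stripe set, while the chance of *some* drive failing grows with drive count if failures are independent. The 2% figure below is a made-up illustrative value, not a real drive statistic:

```python
# Illustrative only: assume each drive independently has a 2% chance
# of failing in some period (made-up number, not a vendor figure).
p = 0.02

p_single = p                     # single disk: one drive failing loses everything
p_stripe = 1 - (1 - p) ** 2      # RAID-0: EITHER drive failing loses everything
print(p_single, round(p_stripe, 4))  # 0.02 0.0396
```

Severity per event is the same either way; only the event probability differs, and only under the independence assumption.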
> You should only use this arrangement where you keep very regular backups,
> you use it largely as a scratch area or you can relatively easily recreate
> the data.
ONLY? That's absurd. Over 99.99% of computers are single-hard-disk
computers...which stand the same or worse chance of data corruption from
hard disk failure. One SHOULD back up, no doubt, and anyone who doesn't is
being foolish, unless they can tolerate a failure. The fact is, disks don't
fail regularly, even single disks. They DO fail though, hence the need for
backup.
> So why are you quoting statistical data when you don't understand basic
> statistical analysis.
Well, I do understand FAR more than basic statistical analysis, especially
when it comes to MTBF, and am more than happy to compare qualifications with
you on this topic off list, if you like.
> The servo actuator will be used less, but not anywhere near half the
> time; the whole point of striping is that you use both drives at once.
Er, thanks, having designed RAID controllers, I do know how RAID works. And
yes, it IS near half. If each disk's actuator only has to move half the
time/amount for a particular file, that's half the work to accomplish the
same task.
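A rough sketch of that actuator point: data striped across two drives puts roughly half of any given file on each drive, so each actuator does roughly half the seek/transfer work it would do alone. This ignores stripe-unit size and layout details, which is deliberate:

```python
# Simplified model: a file striped across n drives puts ~1/n of it on each
# drive, so each actuator does ~1/n of the work for that file.
def per_drive_share(file_bytes, n_drives):
    return file_bytes / n_drives

print(per_drive_share(8_000_000, 2))  # 4000000.0
```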
> > > The reality for MTBF of a RAID-0 will lie in between.
> >
> > But that means it doesn't change compared to a single drive...
> >
> But each drive is dependent on the other so the reliability of
> the system is
> compromised by either drive failing.
Yes, but that doesn't mean the MTBF is lowered.
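For reference, the "school book" series model being argued about looks like this: if component failures are independent and exponentially distributed (constant failure rate), failure rates add, and a two-drive stripe set comes out at half the single-drive MTBF. Whether that idealized model matches real-world MTBF testing of actual drives is exactly what's in dispute here:

```python
# Textbook series-system model (assumption: independent drives with
# constant failure rates -- an idealization, not measured data).
def series_mtbf(mtbf_drives):
    """MTBF of a system that fails when ANY component fails."""
    # Failure rates add for a series system: lambda_sys = sum(1/MTBF_i)
    return 1.0 / sum(1.0 / m for m in mtbf_drives)

# Two hypothetical 500,000-hour drives striped (RAID-0):
print(round(series_mtbf([500_000, 500_000])))  # 250000
```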
> I suggest
> you go get a
> school book and have a look.
That's the problem: I have intimate knowledge of the book when it comes to
MTBF. If you really had any first-hand knowledge of MTBF testing, you would
understand the points I made instead of arguing against them. There is a LOT
of misunderstanding WRT what MTBF really means, and how it should be tested.
Do you have any REAL first hand experience with disk array MTBF? I do.
> The problem with the web is that anybody who thinks they
> understand jumps up
> and tells the world. Soon everybody believes it.
That's true, but I don't base my knowledge on the web, I base it on my
experience on having designed SCSI controllers and disk subsystems, as well
as "being directly involved with" MTBF testing in the storage (and other)
areas for one of the largest computer manufacturers.
> I had a quick look for a reliable source and quickly noticed the weasel
> words on the disk manufacturers site which generally say this about raid-0
> "great for speed but if you have a problem you lose ALL of your
> data on all
> of the disks".
That's just a statement of fact, it's not weasel words. I am surprised you
harp on that point, when it's just simple understanding. It's not a big
deal. There's no conspiracy or lie...no one is trying to hide anything.
> So they admit the severity
No, they do NOT admit any "severity". It's just a simple statement of fact.
> So I'll leave it to an authority on RAID who doesn't have their interests
> in disk manufacturing.
Who can still make incorrect statements, which is exactly what your
reference does.
I do, though, find it interesting: above you claim anyone can just look
misinformation up on the web. Guess you're right.
I think I've had enough of this, you aren't providing any data, just
disagreement and speculation.