From: Robert Myers on
On Jan 4, 11:23 pm, Del Cecchi` <dcecchinos...(a)att.net> wrote:

> How is your bisection bandwidth calculation affected by the reasonable
> amount of per node memory on Blue Gene?  As I understand it, the current
> BG/P node has 4 cores and 4GB of memory.

The memory is too small to be interesting with the number of nodes you
can usefully use. If you just wanted to use nodes essentially for
memory and live with the communication efficiency, you could do some
interesting calculations with the bigger incarnations of Blue Gene.
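To make the scaling issue concrete (an illustrative sketch only, with normalized link bandwidth rather than actual Blue Gene link rates): for an n x n x n torus, a bisection cuts 2*n^2 links (the factor of 2 counts the wraparound links), so each node's share of bisection bandwidth falls as 2/n as the machine grows.

```python
# Illustrative sketch: per-node bisection bandwidth of an n x n x n
# 3D torus, in units of one link's bandwidth. Cutting across one
# dimension severs 2 * n * n links (2 for the torus wraparound).

def torus_bisection_links(n):
    return 2 * n * n

for n in (8, 16, 32, 64):
    nodes = n ** 3
    share = torus_bisection_links(n) / nodes   # = 2/n per node
    print(f"{n}^3 torus: {nodes} nodes, per-node share = {share}")
# The per-node share halves each time the edge length doubles, which
# is why bisection-limited calculations stop scaling with node count.
```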

I'm looking forward to seeing what Blue Waters turns out to be. The
guy from NCSA (document I directed Mayan to) is making all the right
noises. It's discouraging that he presents a chart from NERSC to
characterize who uses big computers for what. I hope that's not an
indication that the heavy hand of the DoE is still on the scale.

Robert.
From: Stephen Fuld on
nmm1(a)cam.ac.uk wrote:

snip

> Incidentally, I should be curious to know how much of InfiniBand
> hardware manual became generally available. It is a long time since
> I looked at the software one, but my recollection/impression is that
> no more than 25% of it has become generally available. If that. As
> I said, perhaps half of SCSI isn't generally available, though few
> people realise what it theoretically provides. But each InfiniBand
> manual is c. 1,000 pages, and SCSI is only a few hundred in all.

That seems a little unfair. The "complete" SCSI spec is far larger
than a few hundred pages. If you look at

http://www.t10.org/scsi-3.htm

you will see that the spec consists of many documents that specify
different aspects of the spec. The "upper" part of the diagram shows
the parts of the spec that pertain to different types of targets. I
don't know of any vendor that implements all of the different parts,
primarily because no vendor produces all of the types of targets that
would require implementation of all the parts, e.g. disk drives, tape
drives, printers, automation devices, enclosures, etc. But I agree that
even within that restriction, there are commands that have not been
implemented (such as block search), primarily because of lack of demand.

Similarly, the bottom part of the diagram shows how SCSI is mapped to a
whole variety of transports. I think most of these have been
implemented by someone, though some are obsolete now. But even here,
many of the parts don't specify the physical aspects of the connection
(i.e. connector size, etc) as they are specified in the relevant
transport spec that is not part of SCSI.

The Infiniband spec is similar in that it specifies a lot of different
things, even more than the SCSI spec does. So naturally it is large.
It specifies everything from the physical size of the connectors to the
management protocol (i.e. sort of the equivalent of SNMP). But much of
it can be ignored by most people (of course, different parts for each
person). For example, an application programmer really doesn't care
about the shape of the physical connectors, and a cable manufacturer
doesn't care about the details of RDMA.

As for how much has been implemented, I suspect that a lot of it hasn't,
to some degree because some people's vision of Infiniband as a large,
site wide (or even multi-site) network never happened. As a result a lot
of the management stuff has not been in demand. Similarly, some of the
options were just not required as the market chose other options.

So, in summary, both specs are large primarily due to the breadth of things
they specify, but much of each can be ignored by most users. And, with
any such comprehensive spec, some of the predictions of the original
standards group just didn't pan out. But I don't see that as a major
problem.


--
- Stephen Fuld
(e-mail address disguised to prevent spam)
From: nmm1 on
In article <hi2jo7$v67$1(a)news.eternal-september.org>,
Stephen Fuld <SFuld(a)Alumni.cmu.edu.invalid> wrote:
>
>> Incidentally, I should be curious to know how much of InfiniBand
>> hardware manual became generally available. It is a long time since
>> I looked at the software one, but my recollection/impression is that
>> no more than 25% of it has become generally available. If that. As
>> I said, perhaps half of SCSI isn't generally available, though few
>> people realise what it theoretically provides. But each InfiniBand
>
>That seems a little unfair. The "complete" SCSI spec is far larger
>than a few hundred pages. If you look at
>
>http://www.t10.org/scsi-3.htm

My mistake. I haven't been keeping up, and was referring to SCSI-2.
I am not surprised that SCSI is suffering from bloat, due to a
surfeit of pressure groups.

>you will see that the spec consists of many documents that specify
>different aspects of the spec. The "upper" part of the diagram shows
>the parts of the spec that pertain to different types of targets. I
>don't know of any vendor that implements all of the different parts,
>primarily because no vendor produces all of the types of targets that
>would require implementation of all the parts. i.e. disk drives, tape
>drives, printers, automation devices, enclosures, etc. But I agree that
>even within that restriction, there are commands that have not been
>implemented (such as block search), primarily because of lack of demand.

Not just that. I said 'generally available' - for example, I don't
know of anyone who has used SCSI for computer-computer links, and
doubt that aspect gets implemented very often. I am excluding one-off
versions by Arcane Technologies Inc., Loose Screw, California, from
which anything can be expected (much as with Hawking's black holes).


Regards,
Nick Maclaren.
From: Stephen Fuld on
Del Cecchi` wrote:
> Anne & Lynn Wheeler wrote:
>> Stephen Fuld <SFuld(a)alumni.cmu.edu.invalid> writes:
>>
>>> Do you want to know the history of Infiniband or some details of what
>>> it was designed to do (and mostly does)?
>>
>>
>> minor reference to SCI (being implementable subset of FutureBus)
>> http://en.wikipedia.org/wiki/Scalable_Coherent_Interface
>>
>> eventually morphing into current InfiniBand
>> http://en.wikipedia.org/wiki/InfiniBand
>>
>
> I don't recall any morphing at all from SCI to IB. And I was involved
> in both. For openers SCI was source synchronous parallel and ib is byte
> serial. SCI is coherent, IB is not.

I agree with Del here. I don't remember SCI as being a part of it, as
SCI was clearly intended as a coherent bus among multiple processors,
whereas IB was aimed at things like clusters, and just as much at being
an I/O interface to disks, etc. SCI was never targeted at that market.

IB was, as the Wikipedia article states, a "combination" of Intel's NGIO
and IBM/Compaq's Future I/O. There is a fairly long story here, that I
have related before, but let's just say that internal (within groups in
Intel) politics and different visions of the requirements between the
two groups led to the situation we have today - that is, a niche
solution (fairly expensive), instead of a ubiquitous (and low-cost) one.

--
- Stephen Fuld
(e-mail address disguised to prevent spam)
From: "Andy "Krazy" Glew" on
> Del Cecchi` wrote:
>> ... if you were astute enough to use InfiniBand interconnect. :-)
>>
>> you can lead a horse to water but you can't make him
>>give up ethernet.

> Andy "Krazy" Glew wrote:
>
> What's the story on Infiniband?

Do you want to know the history of Infiniband or some details of what it
was designed to do (and mostly does)?

>>> Stephen Fuld <SFuld(a)alumni.cmu.edu.invalid> writes:
>>>
>>>> Do you want to know the history of Infiniband or some details of what
>>>> it was designed to do (and mostly does)?
....
> IB was, as the Wikipedia article states, a "combination" of Intel's NGIO
> and IBM/Compaq's Future I/O. There is a fairly long story here, that I
> have related before, but let's just say that internal (within groups in
> Intel) politics and different visions of the requirements between the
> two groups led to the situation we have today - that is, a niche
> solution (fairly expensive), instead of a ubiquitous (and low-cost) one.


This last is mainly what I was asking with "What's the story on Infiniband?"

First, my impression is that Infiniband is basically a niche solution.
Mainly used in HPC, expensive systems. Right?

And the trend is not good. It doesn't look like it will become
ubiquitous. Right?

In fact, I guess I am wondering when IB will die out. Any guesses?

I'm somewhat interested in why this happened.

--

From where I sit, Infiniband looks a lot like IPI, the Intelligent
Peripheral Interface, which succumbed to SCSI. Some of the same
features, too.

--

It's been a while since I looked at Infiniband, but... while it does
have some neat features (scatter/gather per Del, plus the following
list from Wikipedia):

* a direct memory access read from, or write to, a remote node (RDMA)
* a channel send or receive
* a transaction-based operation (that can be reversed)
* a multicast transmission
* an atomic operation
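As a toy illustration of the operation types above (my own sketch, NOT the real InfiniBand verbs API, and all names here are made up): the one-sided RDMA operations touch a remote node's registered memory without involving its CPU, while two-sided channel send/receive requires the target to have posted a receive buffer first.

```python
# Toy model of one-sided RDMA vs. two-sided channel send/receive.
# Illustrative only; not the real verbs API.

class Node:
    def __init__(self):
        self.buffers = {}      # registered (pinned) memory regions
        self.recv_queue = []   # names of posted receive buffers

    def register(self, name, data=b""):
        self.buffers[name] = data

    def post_recv(self, name):
        self.recv_queue.append(name)

def rdma_read(target, region):
    # One-sided: the initiator pulls remote memory directly.
    return target.buffers[region]

def rdma_write(target, region, data):
    # One-sided: the initiator pushes into remote memory directly.
    target.buffers[region] = data

def channel_send(target, data):
    # Two-sided: fails unless the target has posted a receive.
    if not target.recv_queue:
        raise RuntimeError("no posted receive buffer")
    target.buffers[target.recv_queue.pop(0)] = data

b = Node()
b.register("mr0", b"hello")
assert rdma_read(b, "mr0") == b"hello"   # b's CPU never participated
rdma_write(b, "mr0", b"world")
b.post_recv("inbox")
channel_send(b, b"ping")
assert b.buffers["inbox"] == b"ping"
```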

My complaint is also made by Wikipedia: "InfiniBand has no standard
programming interface."

In particular, many of the good-idea features above make more sense if
there is some reasonably low-latency way of accessing them from user
space. Which there is... in some expensive controllers. IMHO they need
to be bound into the instruction set, if necessary by microcode
accessing said controllers, along with implementations that can work on
mass-market systems without Infiniband.

So long as these ideas remain (1) Infiniband-only, (2) limited to
certain systems within Infiniband, (3) accessible only through OS
device drivers, or (4) if user-accessible, decidedly idiosyncratic,
often working only for pinned memory or processes - so long as these
problems persist, Infiniband is predestined for a niche in embedded
systems. Remember: I call HPC supercomputers embedded systems. In
fact, I consider anything that requires pinning memory to be embedded.
Pinning memory is the kiss of death for general-purpose I/O, not
because it's a bad idea, but because it is unreasonable to pin every
process, and if you can't use it from every process, it isn't
ubiquitous.
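A toy sketch of why DMA into unpinned memory doesn't work (my own model, not any real OS or IB interface; all names are invented): the device captures a physical frame number at setup time, so if the OS later migrates the page to a different frame, the device writes into the stale one.

```python
# Toy model: unpinned memory vs. DMA. Physical frames are a list; a
# page table maps a virtual page to a frame. The device holds a raw
# frame number from setup time and bypasses the page table.

phys = ["" for _ in range(4)]      # physical frames
page_table = {"vpage0": 0}         # process's vpage0 -> frame 0

def dma_setup(vpage):
    """Device captures the physical frame at setup time."""
    return page_table[vpage]

def dma_write(frame, data):
    phys[frame] = data             # device bypasses the page table

def os_migrate(vpage, new_frame):
    """Without pinning, the OS may move the page at any time."""
    old = page_table[vpage]
    phys[new_frame] = phys[old]
    page_table[vpage] = new_frame

frame = dma_setup("vpage0")        # device remembers frame 0
os_migrate("vpage0", 2)            # OS moves the page (not pinned!)
dma_write(frame, "payload")        # device writes the stale frame 0

# The process reads through the page table and sees nothing: the
# payload landed in frame 0, but vpage0 now maps to frame 2.
print(phys[page_table["vpage0"]])  # -> "" (empty)
```

Pinning the page (e.g. via mlock on a POSIX system) forbids exactly the migration step above, which is why RDMA-style interfaces demand it.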

Hmm. "Pinning is the kiss of death". This has just been added to my
wiki page on Bad, Good, and Middling Ideas in Computer Architecture.
Although maybe I also need a page of mottoes.

http://semipublic.comp-arch.net/wiki/index.php?title=Bad,_Good,_and_Middling_Ideas_in_Computer_Architecture
http://semipublic.comp-arch.net/wiki/index.php?title=Pinning_is_the_Kiss_of_Death
http://semipublic.comp-arch.net/wiki/index.php?title=Mottoes_and_Slogans_in_Computer_Architecture

(Not much on my wiki yet, but I'm starting.)