[Arm-netbook] EOMA server standard
Gordan Bobic
gordan at bobich.net
Thu Oct 25 09:40:53 BST 2012
On 10/24/2012 06:46 PM, luke.leighton wrote:
> On Wed, Oct 24, 2012 at 2:50 PM, Gordan Bobic <gordan at bobich.net> wrote:
>> On 10/24/2012 02:14 PM, luke.leighton wrote:
>>> On Wed, Oct 24, 2012 at 1:23 PM, Gordan Bobic <gordan at bobich.net> wrote:
>>>
>>>> Possibly the best option might be a single 10G fibre port with
>>>> additional 4-5 copper gigabit ports for people who aren't set up for fibre.
>>>
>>> my only concern is: standards *have* to be based around
>>> "non-optional-itis". or, that whatever you put down it can have
>>> auto-negotiation.
>>
>> Why does the number of external ports on the chassis have to be
>> standardised? What is there to be gained from this?
>
> vendor-independent upgradeability.
>
>> Normal servers don't have a "standard" number of network ports.
>
> good point.
>
> ok, the thought-patterns for standardising on the ports stem from
> the ease with which it was possible to choose the ports for the "non"
> server EOMA standard. i say "ease" but you know what i mean. 8 wires
> for 1000 Eth, 4 for SATA, 8 for USB3, that's hardly rocket science,
> and it's more than adequate.
Hmm... OK, I think I see where this is going. There are two sides to the
problem for a server chassis, at least in the case of a "blade" server
with potentially many EOMA cards in it (is there really any point in
having a single-EOMA-card server?).
The first part is how the interconnect works internally: every EOMA
card connects to the switch inside the server. That could be 1G, or it
could be 10G, if the internals can be made to fit inside the power
envelope (bear in mind that if it takes 5W per 10G port and you have
50-ish EOMA cards in a 1U chassis, the power dissipation is going to be
a serious issue).
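To put a rough number on that, here's a back-of-the-envelope sketch in
Python - the per-port wattages and the card count are assumptions for
illustration, not datasheet figures:

    watts_per_10g_port = 5.0   # assumed 10GBase-T PHY+MAC cost per port
    watts_per_1g_port  = 0.5   # assumed 1000BASE-T cost per port
    cards_per_1u       = 50    # assumed EOMA card count in a 1U chassis

    print("10G fabric: %.0f W for networking alone" % (watts_per_10g_port * cards_per_1u))
    print("1G fabric:  %.0f W for networking alone" % (watts_per_1g_port * cards_per_1u))
    # 10G fabric: 250 W for networking alone
    # 1G fabric:  25 W for networking alone

250W of switch-port dissipation in a 1U box, before you've powered a
single card, is simply not workable.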
The _external_ interconnect is a separate issue, as this wouldn't be
connected to any one EOMA card; it'd be connected to the same switch
fabric that the cards are connected to. Making this 10G-capable makes
much more sense than it does for the internal interconnect, but there
are issues of cost and flexibility, so having pluggable SFP modules to
convert it to whatever is needed would make sense. Some people might
prefer a single 10G fibre port, others a 1G copper port. I would say
that multiple 1G ports with trunking support would be handy, since 10G
switches are still extremely expensive, and it remains to be seen what
the switch fabric for the chassis will end up costing, too. At the
cheap-ish end you might be able to find something that'll do a single
10G uplink with all the other ports being 1G.
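To make the uplink trade-off concrete, a minimal sketch (assuming 50
cards each with a 1G link into the fabric - all figures illustrative):

    cards          = 50
    card_link_gbps = 1.0                     # each EOMA card on a 1G port
    internal_gbps  = cards * card_link_gbps  # aggregate demand into the fabric

    uplinks = [
        ("1 x 10G fibre",  10.0),
        ("4 x 1G trunked",  4.0),
        ("1 x 1G copper",   1.0),
    ]

    for name, gbps in uplinks:
        print("%s: %.1f:1 oversubscription" % (name, internal_gbps / gbps))
    # 1 x 10G fibre: 5.0:1 oversubscription
    # 4 x 1G trunked: 12.5:1 oversubscription
    # 1 x 1G copper: 50.0:1 oversubscription

Even the 10G uplink is 5:1 oversubscribed if every card tries to talk
at once, which is another reason I don't think the external ports need
to be nailed down in the standard.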
>>> this is why i was looking at 10GBase-T because you can use the
>>> *exact* same 8 wires for 10/100/1000/10000 ethernet, it's all
>>> auto-negotiated.
>>
>> Except the network port will suck up 5x more Watts than the whole of the
>> rest of the card put together. I really don't see it as particularly
>> desirable. :)
>>
>>> setting a lowest-common-denominator standard (where its sub-standards
>>> all have auto-negotiation), in a market that's about to expand, is
>>> very very tricky!
>>>
>>> this needs a lot of careful thought, or just to say "nuts to it" and
>>> set a bar and tough luck for any SoCs that are below that level.
>>
>> Since there are no SoCs that can do 10G on the chip, it's pretty
>> academic.
>
> it's not - the calxeda ECX-1000 can:
> http://www.calxeda.com/technology/products/processors/ecx-1000-techspecs/
>
> with an 80gbyte/sec internal crossbar to DMA, and a 72-bit-wide DDR3
> memory bus, it's a pretty shit-hot CPU in under 5 watts. XAUI is i
> believe a standard interconnect: multiplexing-wise it can have up to 5
> XAUI ports, up to 3 10G-Eth, etc. etc.
Yes, but that doesn't mean it's realistic to expect to get anywhere
near 10Gb saturation in any real-world use-case. I'll believe it when I
see it: on a typical LAMP-stack type workload you'd be lucky to achieve
1/100th of that - there just isn't enough CPU. I'm not even sure you'd
reach 10Gb of bandwidth dd-ing a file from tmpfs to /dev/null, let
alone with a more substantial workload.
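For reference, the sort of naive test I mean - a sketch in Python
rather than dd, with a hypothetical file path assumed to be on a tmpfs
mount:

    import time

    path  = "/tmp/testfile"   # hypothetical test file on tmpfs
    block = 1024 * 1024       # 1 MiB reads, roughly dd bs=1M

    total = 0
    start = time.time()
    f = open(path, "rb")
    while True:
        chunk = f.read(block)
        if not chunk:
            break
        total += len(chunk)
    f.close()
    elapsed = time.time() - start

    print("%.2f Gbit/s sustained read" % (total * 8 / elapsed / 1e9))

If a card can't push 10Gb/s out of its own RAM with no other work to
do, it certainly won't do it while serving a real application.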
Gordan