[Arm-netbook] EOMA server standard
peter green
plugwash at p10link.net
Thu Oct 25 22:03:13 BST 2012
>
> ok, right. i've been talking to some companies and the need for a
> standard which covers data centres - e.g. has 10Gigabit Ethernet - has
> come up.
>
The problem is that, afaict, the two main companies trying to push ARM
into the server space are Calxeda and Marvell, and they are taking VERY
different approaches to Ethernet.
Calxeda have put fabric switches ON their EnergyCore SoCs and then created
a mesh of 10-gigabit links to interconnect them, with no need for any
switches on the backplane.
Marvell, OTOH, have put four separate SGMII (serial gigabit media-
independent interface) connections on their Armada XP SoCs, which it
appears they intend you to run at higher-than-standard clock speeds (2.5
Gbps) and bond together to get a total of 10 Gbps to a switch in the
backplane.
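
To put rough numbers on the Marvell approach, here is a quick Python
sketch; the 8b/10b encoding overhead is my own addition, not something
stated above:

    # Marvell-style bonding: four SGMII lanes overclocked to 2.5 Gbit/s each.
    LANES = 4
    DATA_RATE_GBPS = 2.5                 # per-lane payload rate (overclocked SGMII)

    aggregate = LANES * DATA_RATE_GBPS   # 10.0 Gbit/s towards the backplane switch
    line_rate = DATA_RATE_GBPS * 10 / 8  # 3.125 Gbaud per lane, assuming 8b/10b

    print(f"aggregate to the backplane switch: {aggregate} Gbit/s")
    print(f"per-lane serdes line rate:         {line_rate} Gbaud")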
I just don't see how one backplane can reasonably support both systems
without dramatically increasing the cost.
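
To make that concrete, here is a hand-wavy model (my own, with an assumed
12-slot chassis, purely illustrative) of what the backplane ends up having
to provide under each approach:

    SLOTS = 12  # assumed chassis size, purely illustrative

    # Calxeda-style: the fabric switch lives on each SoC, so the backplane
    # only needs passive point-to-point 10G links between slots (full mesh
    # shown as the worst case; real fabrics would use a sparser topology).
    calxeda_backplane = {
        "switch silicon": None,
        "passive 10G links": SLOTS * (SLOTS - 1) // 2,
    }

    # Marvell-style: every slot brings 4 x 2.5G SGMII lanes, and the
    # backplane has to carry a 10G-capable Ethernet switch to tie them together.
    marvell_backplane = {
        "switch silicon": "10G Ethernet switch chip",
        "SGMII lanes to route": SLOTS * 4,
    }

    print("Calxeda-style backplane:", calxeda_backplane)
    print("Marvell-style backplane:", marvell_backplane)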
Meanwhile, chips not designed explicitly for servers often seem to have
RGMII instead of SGMII, because they aren't so constrained for pins.
However, it does look like you can get RGMII-to-SGMII converters (
http://www.microsemi.com/ethernet-mii-converter )
Personally I think it may be too early to design a backplane standard
for ARM servers, and we may need to wait and see whether the bulk of the
market follows the Marvell approach or the Calxeda one. If you had to
choose one, I'd probably be inclined to go with the Marvell one, as at
least it can fairly easily support non-server-specific SoCs.
> for the pin-outs i figured that at least one 10GBase-T interface (8
> pins plus 8 GND spacers) would be acceptable,
Using 10GBASE-T to communicate with backplanes is crazy since it's power-
hungry as hell; even using 1000BASE-T is still pretty crazy. Remember the
SoCs don't support BASE-T directly; they need separate transceiver (PHY)
chips.
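
As a very rough way of seeing where the power goes, here is a sketch that
just multiplies out transceiver count; the per-PHY wattages are placeholder
assumptions to be swapped for real datasheet figures, not numbers from
anything above:

    NODES = 48  # assumed node count, purely illustrative

    # Placeholder per-port power figures (watts) -- NOT from the post above,
    # replace with real datasheet numbers.
    PHY_WATTS = {
        "10GBASE-T PHY": 4.0,
        "1000BASE-T PHY": 0.5,
        "direct serdes (SGMII), no PHY": 0.2,
    }

    for link, watts in PHY_WATTS.items():
        # one transceiver at the card end and one at the switch end
        total = watts * NODES * 2
        print(f"{link:32s} ~{total:6.1f} W extra across {NODES} nodes")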