I just need to clear a few things up.
We have been using InfiniBand with XenServer (XS) since v6.0; below are our findings:
OFED 1.5.4 compiles and runs on XS v6.0. IPoIB is the only functioning transport mechanism; SDP, SRP and iSER DO NOT WORK. There are naming conflicts between the OFED stack and the Xen kernel which prevent the SRP modules from loading, and whilst the iSER module loads, attempting to create a connection with iSER as the transport crashes the host.
This is due to a bug in the kernel tree that was introduced in v6.0 and never rectified (InfiniBand being outside vendor support for XenServer). I note that iSER reportedly worked correctly in XS 5.6.
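If you want to verify this on your own hosts, a quick check along these lines will show the failures (these are the standard OFED module names; the exact unresolved symbols in your dmesg output will depend on the kernel build):

    # Try loading the OFED transport modules on a XS 6.0 dom0.
    # ib_ipoib loads cleanly; ib_srp fails with "Unknown symbol"
    # errors from the naming conflicts described above.
    for mod in ib_ipoib ib_srp ib_iser; do
        if modprobe "$mod" 2>/dev/null; then
            echo "$mod: loaded"
        else
            echo "$mod: FAILED to load"
        fi
    done

    # The kernel log shows which symbols the failing modules could
    # not resolve against the Xen dom0 kernel.
    dmesg | grep -iE 'unknown symbol|ib_srp|iser' | tail -n 20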
In XS 6.1, OFED fails to compile due to further duplicate-name conflicts in the network stack; we managed to hack 1.5.4 into compiling (sketch below). I note this is around when Mellanox introduced OFED 2.0; however, that only supports the later model cards, so if you run 10Gb InfiniBand product it won't work. Once again, IPoIB is the only transport that works.
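For the curious, the hack was essentially renaming the clashing identifiers in the OFED source tree before building. The symbol name below is purely illustrative, so substitute whatever your build actually reports as duplicated; the recipe was along these lines:

    # Unpack the OFED 1.5.4 source tree.
    tar xzf OFED-1.5.4.tgz && cd OFED-1.5.4

    # Rename a conflicting identifier throughout the tree.
    # "netif_conflicting_helper" is a placeholder, not the real
    # symbol -- use whichever name the compile errors point at.
    grep -rl netif_conflicting_helper . |
        xargs sed -i 's/netif_conflicting_helper/ofed_netif_conflicting_helper/g'

    # Rebuild and install against the running dom0 kernel.
    ./install.pl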
In XS 6.2, our hacked version of OFED 1.5.4 compiles fine; however, OFED 2.0 breaks. I believe there is a method within the 2.0 packaging to recompile against a new kernel (sketch below), but we did not try it.
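The method I am referring to is the Mellanox installer's --add-kernel-support option, which rebuilds the OFED packages against whatever kernel is currently running. Untested by us on XS 6.2, but it would look roughly like this (ISO filename is an example):

    # Mount the MLNX_OFED 2.0 ISO and ask the installer to rebuild
    # its packages against the running dom0 kernel.
    # (Untested by us on XS 6.2; substitute your actual ISO name.)
    mount -o loop MLNX_OFED_LINUX-2.0-x.x.x.iso /mnt
    cd /mnt
    ./mlnxofedinstall --add-kernel-support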
We have been mulling over passing the HCA through from dom0 and setting up storage on a virtual machine; however, this is currently a hack and is not past alpha level yet.
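For anyone who wants to experiment, the passthrough side at least uses stock XenServer plumbing. The VM UUID and PCI address below are examples; find your own with xe vm-list and lspci:

    # Identify the HCA's PCI address (e.g. 04:00.0).
    lspci | grep -i mellanox

    # Hand the device to the storage VM (UUID and address are
    # examples; the "0/" prefix is XenServer's passthrough syntax).
    xe vm-param-set uuid=<storage-vm-uuid> \
        other-config:pci=0/0000:04:00.0

    # Restart the VM so the device appears inside it.
    xe vm-reboot uuid=<storage-vm-uuid>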
In summary, it would be really nice for Mellanox to work with the new (and improved) Xen open-source project to get InfiniBand drivers into the kernel. It makes sense for the best storage transport to function in the most widely deployed hypervisor.