From: Pasi Kärkkäinen on
On Tue, Jun 22, 2010 at 09:24:21AM -0700, Jiahua wrote:
> Maybe a naive question, but why were 50 targets needed? Can each target
> only serve about 25K IOPS? A single ramdisk should be able to handle this.
> Where is the bottleneck?
>

That's a good question.. I don't know. Maybe the StarWind iSCSI target didn't scale very well? :)

Or maybe it's related to the multi-queue support on the initiator NIC:
to spread the load across multiple queues, and thus multiple IRQs and multiple CPU cores,
they may have needed multiple target IP addresses, and it was easiest to just use
multiple target systems?
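
For context, the arithmetic behind that ~25K figure is straightforward; a quick sketch (my own back-of-the-envelope numbers, derived from the setup described below, not taken from the benchmark write-up):

```python
# Back-of-the-envelope math for the Intel/Microsoft 1.25M IOPS run:
# 50 iSCSI LUNs spread over 10 target boxes, 512-byte IOs.
iops_total = 1_250_000
luns = 50
targets = 10
io_size_bytes = 512

iops_per_lun = iops_total // luns      # 25_000 -- the "25K" per LUN
iops_per_box = iops_total // targets   # 125_000 per physical target box

# Raw data payload on the initiator's single 10 Gbit link,
# ignoring iSCSI/TCP/IP/Ethernet header overhead (significant at 512B IOs):
payload_gbit = iops_total * io_size_bytes * 8 / 1e9

print(iops_per_lun, iops_per_box, round(payload_gbit, 2))  # 25000 125000 5.12
```

So even at 1.25M IOPS the raw payload is only about half of the 10 Gbit link; the real constraints are packet rate and per-IO CPU cost, which is exactly what spreading connections across queues/IRQs/cores is meant to address.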


> We had a similar experiment, but with Infiniband and Lustre. It turned
> out Lustre has a rate limit in the RPC handling layer. Is it the same
> problem here?
>

Note that we're trying to benchmark the *initiator* here, not the targets..
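
If someone tries to reproduce this on Linux, fio could presumably stand in for IOmeter. A minimal jobfile sketch, assuming the LUNs appear as local block devices (the /dev/sdX names are placeholders, and the real run would have one job section per LUN, 50 in total):

```ini
; Hypothetical fio jobfile approximating the Intel/MS workload:
; 512-byte random reads, queue depth 20 per LUN, async direct IO.
[global]
ioengine=libaio
direct=1
rw=randread
bs=512
iodepth=20
runtime=60
time_based=1
group_reporting=1

; one section per iSCSI LUN; device names are placeholders
[lun01]
filename=/dev/sdb
[lun02]
filename=/dev/sdc
```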

-- Pasi

> Jiahua
>
>
>
> > On Tue, Jun 22, 2010 at 6:44 AM, Pasi Kärkkäinen <pasik(a)iki.fi> wrote:
> > Hello,
> >
> > Recently Intel and Microsoft demonstrated pushing over 1.25 million IOPS using software iSCSI and a single 10 Gbit NIC:
> > http://communities.intel.com/community/wired/blog/2010/04/22/1-million-iops-how-about-125-million
> >
> > Earlier they achieved one (1.0) million IOPS:
> > http://blog.fosketts.net/2010/01/14/microsoft-intel-push-million-iscsi-iops/
> > http://communities.intel.com/community/openportit/server/blog/2010/01/19/1000000-iops-with-iscsi--thats-not-a-typo
> >
> > The benchmark setup explained:
> > http://communities.intel.com/community/wired/blog/2010/04/20/1-million-iop-article-explained
> > http://dlbmodigital.microsoft.com/ppt/TN-100114-JSchwartz_SMorgan_JPlawner-1032432956-FINAL.pdf
> >
> >
> > So the question is.. does anyone have enough new hardware to try this with Linux?
> > Can Linux scale to over 1 million IO operations per second?
> >
> >
> > Intel and Microsoft used the following for the benchmark:
> >
> >       - Single Windows 2008 R2 system with an Intel Xeon 5600 series CPU,
> >         a single-port Intel 82599 10 Gbit NIC and the MS software iSCSI initiator
> >         connecting to 50x iSCSI LUNs.
> >       - IOmeter to benchmark all 50x iSCSI LUNs concurrently.
> >
> >       - 10 servers as iSCSI targets, each with 5x ramdisk LUNs, 50x ramdisk LUNs in total.
> >       - The iSCSI target servers also used 10 Gbit NICs, plus StarWind iSCSI target software.
> >       - A Cisco 10 Gbit switch (Nexus) connecting the servers.
> >
> >       - For the 1.25 million IOPS result they used 512 bytes/IO, outstanding IOs=20.
> >       - No jumbo frames, just the standard MTU=1500.
> >
> > They used many LUNs so they could scale the iSCSI connections across multiple CPU cores
> > using RSS (Receive Side Scaling) and MSI-X interrupts.
> >
> > So.. who wants to try this? :) Unfortunately I don't have 11x extra computers with 10 Gbit NICs atm to try it myself..
> >
> > This test covers networking, the block layer, and the software iSCSI initiator..
> > so it would be nice to see whether we can find any bottlenecks in the current Linux kernel.
> >
> > Comments please!
> >
> > -- Pasi
> >
> > --
> > To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> > the body of a message to majordomo(a)vger.kernel.org
> > More majordomo info at �http://vger.kernel.org/majordomo-info.html
> > Please read the FAQ at �http://www.tux.org/lkml/
> >
From: Pasi Kärkkäinen on
Hello,

How about numbers using other transports, like FC? Has anyone done benchmarks recently?

-- Pasi

On Tue, Jun 22, 2010 at 04:44:10PM +0300, Pasi Kärkkäinen wrote:
> Hello,
>
> Recently Intel and Microsoft demonstrated pushing over 1.25 million IOPS using software iSCSI and a single 10 Gbit NIC:
> [snip the rest of the quoted message, identical to the one above]