SPECsfs2008_nfs.v3 Result

Oracle Corporation : Sun ZFS Storage 7420
SPECsfs2008_nfs.v3 = 267928 Ops/Sec (Overall Response Time = 1.31 msec)
Performance

Throughput (ops/sec) | Response (msec)
12601 | 1.0
25221 | 0.8
37843 | 0.6
50466 | 0.7
63110 | 0.7
75801 | 0.7
88384 | 0.9
101131 | 1.0
113768 | 1.1
126414 | 1.1
139195 | 1.2
151962 | 1.3
164821 | 1.4
177424 | 1.5
189957 | 1.6
202943 | 1.7
215327 | 1.9
228602 | 1.9
241096 | 2.2
255978 | 2.6
267928 | 3.1
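SPEC reports the overall response time as the area under the response-time versus throughput curve divided by the peak throughput. The sketch below reproduces the reported 1.31 msec from the table above using trapezoidal integration; anchoring the curve at a (0, 0) origin is this sketch's assumption, not something stated in the disclosure.

    # Rough check of the overall response time (not the official SPEC tool).
    tput = [12601, 25221, 37843, 50466, 63110, 75801, 88384, 101131,
            113768, 126414, 139195, 151962, 164821, 177424, 189957,
            202943, 215327, 228602, 241096, 255978, 267928]
    resp = [1.0, 0.8, 0.6, 0.7, 0.7, 0.7, 0.9, 1.0, 1.1, 1.1, 1.2, 1.3,
            1.4, 1.5, 1.6, 1.7, 1.9, 1.9, 2.2, 2.6, 3.1]

    # Segment from an assumed (0, 0) origin up to the first measured point,
    # then trapezoids between successive measured points.
    area = tput[0] * resp[0] / 2
    for i in range(len(tput) - 1):
        area += (resp[i] + resp[i + 1]) / 2 * (tput[i + 1] - tput[i])

    print(round(area / tput[-1], 2))  # -> 1.31, matching the reported value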
Product and Test Information

Tested By | Oracle Corporation
Product Name | Sun ZFS Storage 7420
Hardware Available | March 2012
Software Available | Dec 2011
Date Tested | March 2012
SFS License Number | 6
Licensee Locations | Broomfield, CO USA
The Sun ZFS Storage 7420 is a high-performance storage system that offers enterprise-class NAS capabilities with industry-leading Oracle integration and storage efficiency, in a cost-effective high-availability configuration. The Sun ZFS Storage 7420's simplified setup and management combine with industry-leading storage analytics and a performance-optimized platform to make it the ideal solution for enterprise customers' storage requirements. The Sun ZFS Storage 7420 can scale to 2 TB of memory, 80 CPU cores, and 1.7 PB of capacity, with up to 4 TB of flash cache in a high-availability configuration.
Configuration Bill of Materials

Item No | Qty | Type | Vendor | Model/Name | Description
1 | 2 | Storage Controller | Oracle | 7420 | Sun ZFS Storage 7420 Controller
2 | 4 | 10 Gigabit Ethernet Adapter | Oracle | Sun PCI-E Dual 10GbE Fiber | Dual port 10Gb Ethernet adapter
3 | 8 | Short Wave Pluggable Transceiver | Oracle | 10Gbps Short Wave Pluggable Transceiver (SFP+) | Short Wave Pluggable Transceiver
4 | 2 | Shelf w/Disk Drives | Oracle | Sun Disk Shelf SAS-2 with 20 Disk Drives | SAS-2 Disk Shelf with 20x300GB 15K RPM HDD
5 | 10 | Shelf w/Disk Drives | Oracle | Sun Disk Shelf SAS-2 with 24 Disk Drives | SAS-2 Disk Shelf with 24x300GB 15K RPM HDD
6 | 8 | SSD Drive | Oracle | 512GB Solid State Drive SATA-2 | SSD Read Flash Accelerator 512GB
7 | 8 | SSD Drive | Oracle | SAS-2 73GB 3.5-inch SSD Write Flash Accelerator | SSD Write Flash Accelerator 73GB
8 | 4 | SAS-2 Host Bus Adapter | Oracle | SAS-2 8-Port 6Gbps HBA | SAS-2 8-Port 6Gbps Host Bus Adapter
Server Software

OS Name and Version | 2011.1.1
Other Software | None
Filesystem Software | ZFS
Server Tuning

Name | Value | Description
atime | off | Disable atime updates (all file systems)

Server Tuning Notes
None
Disks and Filesystems

Description | Number of Disks | Usable Size
300GB SAS 15K RPM Disk Drives | 280 | 37.1 TB
500GB SATA 7.2K RPM Disk Drives (each controller contains 2 of these drives, mirrored; they are not used for cache data or data storage) | 4 | 899.0 GB
Total | 284 | 37.9 TB

Number of Filesystems | 32
Total Exported Capacity | 36.32 TB
Filesystem Type | ZFS
Filesystem Creation Options | default
Filesystem Config | 32 ZFS file systems
Fileset Size | 31008.5 GB
The storage configuration consists of 12 shelves: 10 with 24 disk drives each, and 2 with 20 disk drives and 4 write flash devices each. Each controller head has 4 read flash accelerators. Each controller is configured to use 140 disk drives, 4 write flash accelerator devices, and 4 read flash accelerator devices. Each controller's disk drives, write flash accelerators, and read flash accelerators are then divided into 4 pools. Each pool is configured with 34 disk drives, 1 write flash accelerator, 1 read flash accelerator, and 1 spare disk drive, and mirrors the data (RAID1) across its 34 drives. The write flash accelerator in each pool is used for the ZFS Intent Log (ZIL), and the read flash accelerator is used as a level 2 cache (L2ARC) for the pool. Each pool is configured with 4 ZFS file systems. Since each controller has 4 pools and each pool contains 4 ZFS file systems, each of the 2 controllers has 16 ZFS file systems, giving the SUT 32 ZFS file systems for the benchmark. There are 2 internal mirrored system disk drives per controller, used only for the controller's core operating system; these drives are not used for data cache or for storing user data.
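A quick arithmetic check of the layout just described (plain Python, not vendor tooling); the 37.1 TB usable figure in the disks table follows if its "TB" is read as binary (2**40-byte) units, which is an assumption of this sketch.

    drives = 10 * 24 + 2 * 20                   # 280 data drives across 12 shelves
    controllers, pools_per_controller = 2, 4
    pools = controllers * pools_per_controller  # 8 pools in the cluster
    assert pools * (34 + 1) == drives == 280    # 34 data drives + 1 spare per pool

    mirror_pairs = pools * 34 // 2              # RAID1 -> 136 two-way mirrors
    usable_tib = mirror_pairs * 300e9 / 2**40
    print(round(usable_tib, 1))                 # ~37.1, matching the disks table

    assert pools * 4 == 32                      # 4 ZFS file systems per pool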
Network Configuration

Item No | Network Type | Number of Ports Used | Notes
1 | 10 Gigabit Ethernet | 4 | Each controller has 2 dual-port 10 Gigabit Ethernet cards, using only a single port on each, with jumbo frames
Network Configuration Notes
There are 8 ports in total, but 4 are active at a time for high availability. The MTU size is set to 9000 on each of the 10 Gigabit ports.
Benchmark Network
Each controller was configured to use a single port on each of two dual-port 10GbE network adapters for the benchmark network. All of the 10GbE network ports on all the load generator systems were connected to the Arista 7124SX switch, which provided connectivity.
Processing Elements

Item No | Qty | Type | Description | Processing Function
1 | 8 | CPU | 2.4GHz 10-core Intel Xeon(tm) Processor E7-4870 | NFS, ZFS, TCP/IP, RAID/Storage Drivers
Processing Element Notes
Each Sun ZFS Storage 7420 controller contains 4 physical processors,
each with 10 processing cores.
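The quantity of 8 in the table reconciles with the per-controller note above; a one-line arithmetic check (not part of the official disclosure):

    controllers, sockets_per_controller, cores_per_socket = 2, 4, 10
    print(controllers * sockets_per_controller)                     # 8 CPUs, as in the table
    print(controllers * sockets_per_controller * cores_per_socket)  # 80 cores in the HA pair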
Memory

Description | Size in GB | Number of Instances | Total GB | Nonvolatile
7420 controller memory | 1024 | 2 | 2048 | V
SSD 73GB SAS-2 Write Flash Accelerator | 73 | 8 | 584 | NV
SSD 512GB SATA-2 Read Flash Accelerator | 512 | 8 | 4096 | NV
Grand Total Memory Gigabytes | | | 6728 |
Memory Notes
The Sun ZFS Storage 7420 controllers' main memory is used for the Adaptive Replacement Cache (ARC), the data cache, and operating system memory. A separate device, a write flash accelerator, is used for the ZFS Intent Log (ZIL). The read flash accelerator is dedicated storage also used to hold cached data, called the Level 2 Adaptive Replacement Cache (L2ARC).
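The grand total in the memory table is plain addition across the three tiers; a quick check:

    dram  = 2 * 1024   # GB, controller main memory for the ARC (volatile)
    zil   = 8 * 73     # GB, write flash accelerators for the ZIL (nonvolatile)
    l2arc = 8 * 512    # GB, read flash accelerators for the L2ARC (nonvolatile)
    print(dram + zil + l2arc)  # 6728 GB, matching the table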
Stable Storage
The stable storage requirement is guaranteed by the ZFS Intent Log (ZIL), which logs writes and other file-system-modifying transactions to either a write flash accelerator or a disk drive. Writes and other file-system-modifying transactions are not acknowledged until the data is written to stable storage. Because this is an active-active high-availability cluster, in the event of a controller failure or power loss the other active controller can take over for the failed controller. The write flash accelerators and disk drives are located in the disk shelves and can be accessed via the 4 backend SAS channels from both controllers, so the remaining active controller can complete any outstanding transactions using the ZIL. In the event of power loss to both controllers, the ZIL is replayed after power is restored to reinstate any writes and other file system changes.
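The mechanism amounts to write-ahead intent logging: a transaction is acknowledged only after its log record is on durable media. The sketch below is a minimal illustration of that acknowledgement rule, assuming a synchronous log device; the class name and layout are hypothetical, not Oracle's implementation.

    import os

    class IntentLog:
        """Minimal write-ahead intent log: acknowledge a transaction only
        after its record is durable (O_DSYNC stands in for a write flash
        accelerator here)."""

        def __init__(self, log_path: str):
            # O_DSYNC: each write() returns only once the data has reached
            # the device, not merely the page cache.
            self.fd = os.open(log_path,
                              os.O_WRONLY | os.O_CREAT | os.O_DSYNC, 0o600)

        def commit(self, record: bytes) -> None:
            os.write(self.fd, record)  # durable before this call returns
            # Only now may the server acknowledge the NFS write; after a
            # crash or failover, unapplied records are replayed from the log.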
System Under Test Configuration Notes
The system under test is a Sun ZFS Storage 7420
cluster set up in an active-active configuration.
Other System Notes
Test Environment Bill of Materials

Item No | Qty | Vendor | Model/Name | Description
1 | 4 | Oracle | Sun Fire X4270 M2 | Sun Fire X4270 M2 with 144GB RAM and Oracle Solaris 10 8/11
2 | 1 | Arista | 7124SX | Arista 7124SX 24-port 10Gb Ethernet switch
Load Generators

LG Type Name | LG1
BOM Item # | 1
Processor Name | Intel Xeon(tm) X5680
Processor Speed | 3.3 GHz
Number of Processors (chips) | 2
Number of Cores/Chip | 6
Memory Size | 144 GB
Operating System | Oracle Solaris 10 8/11
Network Type | 10Gb Ethernet
Load Generator (LG) Configuration
Benchmark Parameters

Network Attached Storage Type | NFS V3
Number of Load Generators | 4
Number of Processes per LG | 256
Biod Max Read Setting | 2
Biod Max Write Setting | 2
Block Size | AUTO
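Some numbers derived from these parameters (simple arithmetic, not part of the official disclosure):

    load_generators, procs_per_lg = 4, 256
    total_procs = load_generators * procs_per_lg  # 1024 benchmark processes
    peak_ops = 267928                             # from the performance table
    print(total_procs, round(peak_ops / total_procs, 1))  # 1024 procs, ~261.6 ops/sec each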
Testbed Configuration

LG No | LG Type | Network | Target Filesystems | Notes
1..4 | LG1 | 1 | /export/sfs-1../export/sfs-32 | None
Load Generator Configuration Notes
File systems were mounted on all clients and all were
connected to the same physical and logical network.
Uniform Access Rule Compliance
Every client used 256 processes. All 32 file systems were mounted on and accessed by each client, divided evenly amongst all network paths to the 7420 controllers. The file system data was evenly distributed across the backend storage.
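As an illustration of how that even division can be arranged, the sketch below assigns each client's 256 processes round-robin across the 32 file systems; the mount-point names follow the testbed table, but the round-robin scheme itself is an assumption for illustration, not taken from the disclosure.

    filesystems = [f"/export/sfs-{i}" for i in range(1, 33)]
    procs_per_client = 256

    for client in range(4):
        mounts = [filesystems[p % len(filesystems)]
                  for p in range(procs_per_client)]
        # 256 processes over 32 file systems -> exactly 8 processes per fs,
        # so every network path carries an even share of the load.
        assert all(mounts.count(fs) == 8 for fs in filesystems)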
Other Notes
Config Diagrams
Copyright © 1997-2008 Standard Performance Evaluation Corporation
First published at SPEC.org on 17-Apr-2012