SPECstorage™ Solution 2020_genomics Result

Copyright © 2016-2022 Standard Performance Evaluation Corporation

UBIX TECHNOLOGY CO., LTD.
UbiPower 18000 distributed all-flash storage system
SPECstorage Solution 2020_genomics = 1680 Jobs
Overall Response Time = 0.25 msec


Performance

Business Metric (Jobs) | Average Latency (msec) | Jobs Ops/Sec | Jobs MB/Sec
 140 | 0.171 |  140003 |  11888
 280 | 0.179 |  280006 |  23788
 420 | 0.186 |  420010 |  35673
 560 | 0.195 |  560012 |  47567
 700 | 0.204 |  700015 |  59468
 840 | 0.214 |  840021 |  71342
 980 | 0.231 |  980025 |  83231
1120 | 0.248 | 1120029 |  95124
1260 | 0.265 | 1260031 | 107018
1400 | 0.285 | 1400036 | 118910
1540 | 0.304 | 1540040 | 130793
1680 | 0.703 | 1680045 | 142695
Performance Graph


Product and Test Information

UbiPower 18000 distributed all-flash storage system
Tested by: UBIX TECHNOLOGY CO., LTD.
Hardware Available: July 2022
Software Available: July 2022
Date Tested: September 2022
License Number: 6513
Licensee Locations: Shenzhen, China

UbiPower 18000 is a new-generation, ultra-high-performance distributed all-flash storage system dedicated to providing high-performance data services for HPC/HPDA workloads, including AI and machine learning, genomics sequencing, EDA, CAD/CAE, real-time analytics, and media rendering. UbiPower 18000 combines hyperscale NVMe SSDs and Storage Class Memory with storage services, all connected over RDMA networks, to create a low-latency, high-throughput, scale-out architecture.

Solution Under Test Bill of Materials

Item No | Qty | Type | Vendor | Model/Name | Description
1 | 14 | Storage Node | UBIX | UbiPower 18000 High-Performance X Node | UbiPower 18000 High-Performance X Node, including 16 slots for U.2 drives.
2 | 28 | Client Server | Intel | M50CYP2UR208 | The M50CYP2UR208 is a 2U 2-socket rack server with two 32-core Intel Xeon Gold 6338 CPUs @ 2.0 GHz and 512 GiB of system memory. Each server has 1x Mellanox ConnectX-5 100GbE dual-port network card. One server is used as the Prime Client; all 28 servers, including the Prime Client, generate the workload.
3 | 56 | 100GbE Card | Mellanox | MCX516A-CDAT | ConnectX-5 Ex EN network interface card, 100GbE dual-port QSFP28, PCIe Gen 4.0 x16.
4 | 224 | SSD | Samsung | PM9A3 | 1.92TB NVMe SSD
5 | 2 | Switch | Huawei | 8850-64CQ-EI | CloudEngine 8850 delivers high performance, high port density, and low latency for cloud-oriented data center networks and high-end campus networks. It supports 64 x 100 GE QSFP28 ports.

Configuration Diagrams

  1. Network Diagrams

Component Software

Item No | Component | Type | Name and Version | Description
1 | Clients | Client OS | CentOS 7.9 | Operating System (OS) for the clients in the M50CYP2UR208 servers.
2 | Storage Node | Storage OS | UbiPower OS 1.1.0 | Storage Operating System

Hardware Configuration and Tuning - Physical

Storage Server
Parameter Name | Value | Description
Port Speed | 100Gb | Each storage node has 4x 100GbE Ethernet ports connected to the switches.
MTU | 4200 | Jumbo frames configured for the 100Gb ports.
Clients
Parameter Name | Value | Description
Port Speed | 100Gb | Each client has 2x 100GbE Ethernet ports connected to the switches.
MTU | 4200 | Jumbo frames configured for the 100Gb ports.
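
On Linux hosts, a jumbo-frame MTU of this size can be set with standard tooling. A minimal sketch, assuming a bonded interface named bond2 (the interface name is illustrative, not taken from this report):

  # Set a 4200-byte jumbo-frame MTU on the bonded 100GbE interface
  ip link set dev bond2 mtu 4200
  # Confirm the setting
  ip link show dev bond2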

Hardware Configuration and Tuning Notes

None

Software Configuration and Tuning - Physical

Clients
Parameter Name | Value | Description
bond | bond2 | The two 100GbE interfaces of each network card are bonded into a single bond2 interface. The bond2 algorithm is balance-xor. UbiPower configures the bonding of the storage nodes automatically.
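
For illustration, a balance-xor bond over the two ports of a card can be created on a Linux client with iproute2 as sketched below; the port names are hypothetical, and per the report UbiPower applies the equivalent bonding on the storage nodes automatically:

  # Create the bond device in balance-xor mode (port names are hypothetical)
  ip link add bond2 type bond mode balance-xor
  # Enslave the two 100GbE ports (a port must be down before enslaving)
  ip link set ens1f0 down && ip link set ens1f0 master bond2
  ip link set ens1f1 down && ip link set ens1f1 master bond2
  # Bring the bond up
  ip link set bond2 up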

Software Configuration and Tuning Notes

The single filesystem was attached via a single mount per client. The mount string used was "mount -t ubipfs /pool/spec-fs /mnt/spec-test"
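
A minimal sketch of issuing that mount on every load generator, assuming hypothetical hostnames client01 through client28 and passwordless SSH:

  # Mount the single ubipfs namespace on all 28 clients (hostnames are hypothetical)
  for i in $(seq -w 1 28); do
      ssh "client$i" "mkdir -p /mnt/spec-test && mount -t ubipfs /pool/spec-fs /mnt/spec-test"
  done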

Service SLA Notes

None

Storage and Filesystems

Item No | Description | Data Protection | Stable Storage | Qty
1 | Samsung PM9A3 1.92TB used for the UbiPower 18000 Storage System | 8+2 | Yes | 224
2 | Micron 480GB SSD used by the storage nodes and clients to store and boot the OS | RAID-1 | Yes | 84
Number of Filesystems: 1
Total Capacity: 308 TiB
Filesystem Type: ubipfs

Filesystem Creation Notes

Each storage node has 16x Samsung PM9A3 SSDs attached to it, which are dedicated to the UbiPower filesystem. The single filesystem consumed all of the SSDs across all of the nodes.

Storage and Filesystem Notes

The UbiPower filesystem was created and distributed evenly across all 14 storage nodes in the cluster with an 8+2 erasure coding (EC) configuration. All data and metadata are distributed evenly across the 14 storage nodes.
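
As a rough sanity check on the 308 TiB figure above, usable capacity under 8+2 erasure coding can be estimated from the raw drive capacity; the estimate ignores filesystem and metadata overhead, which accounts for the small remaining gap:

  # 224 drives x 1.92 TB (decimal) raw, converted to TiB, with 8/10 usable under 8+2 EC
  echo "224 * 1.92 * 10^12 / 2^40" | bc -l           # ~391 TiB raw
  echo "224 * 1.92 * 10^12 / 2^40 * 8 / 10" | bc -l  # ~313 TiB usable before overhead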

Transport Configuration - Physical

Item No | Transport Type | Number of Ports Used | Notes
1 | 100GbE Network | 56 | Each client is connected to a single port of each switch.
2 | 100GbE Network | 56 | Each storage node is connected to two ports of each switch.

Transport Configuration Notes

For each client server, the two 100GbE interfaces are bonded into one logical port with an MTU of 4200. For each storage node, the two 100GbE interfaces of each network card are bonded into one logical port with an MTU of 4200. PFC and ECN are configured between the switches, client servers, and storage nodes.

Switches - Physical

Item No | Switch Name | Switch Type | Total Port Count | Used Port Count | Notes
1 | CloudEngine 8850-64CQ-EI | 100GbE | 128 | 128 | 2x CloudEngine switches connected together with an 800Gb LAG, which uses 8 ports on each switch. Across the two switches, 56 ports are used for client connections and 56 ports for storage node connections.

Processing Elements - Physical

Item No | Qty | Type | Location | Description | Processing Function
1 | 56 | CPU | Client Server | Intel Xeon Gold 6338 CPU @ 2.0 GHz | UbiPower Storage Client, Linux OS, load generator and device driver
2 | 28 | CPU | Storage Node | Intel Xeon Gold 6338 CPU @ 2.0 GHz | UbiPower Storage OS

Processing Element Notes

None

Memory - Physical

Description | Size in GiB | Number of Instances | Nonvolatile | Total GiB
28x client servers with 512 GiB | 512 | 28 | V | 14336
14x storage nodes with 512 GiB | 512 | 14 | V | 7168
14x storage nodes with 2048 GiB of storage class memory | 2048 | 14 | NV | 28672
Grand Total Memory Gibibytes: 50176
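
The grand total follows directly from the three rows above:

  # Total memory in GiB: client DRAM + storage node DRAM + storage class memory
  echo $(( 28*512 + 14*512 + 14*2048 ))   # prints 50176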

Memory Notes

Each storage node has main memory that is used for the operating system and for caching filesystem read data. Each storage node also has storage class memory; see "Stable Storage" below for more information.

Stable Storage

In UbiPower 18000, all writes are committed directly to nonvolatile storage class memory before being written to the NVMe SSDs. All data is protected by UbiPower OS Distributed Erasure Coding Protection (8+2 in this test) across the storage nodes in the cluster. If the storage class memory fails, data is no longer staged in storage class memory but is instead written to the NVMe SSDs in a write-through manner.

Solution Under Test Configuration Notes

None

Other Solution Notes

None

Dataflow

The 28 client servers are the load generators for the benchmark. Each load generator has access to the single namespace of the UbiPower filesystem. The benchmark tool accesses a single mount point on each load generator; each mount point corresponds to a single shared base directory in the filesystem. The clients process the file operations and issue data requests to and from the 14 UbiPower storage nodes.

Other Notes

None

Other Report Notes

None


Generated on Wed Sep 28 16:43:47 2022 by SpecReport
Copyright © 2016-2022 Standard Performance Evaluation Corporation