System Description

The following diagram shows how the cluster is configured in the classroom.

[Diagram: The Data Science Lab Cluster]

Combined Cluster Specification

Total Number of Nodes (Motherboards): 30
Total Number of Cores: 144
Physical Cores Per Motherboard: 4
Number of Physical Cores: 120
Number of Hyper-Threaded Cores: 24
Total Memory: 655.76 GB
Network 1: 1 Gigabit Ethernet
Network 2: 10 Gigabit Ethernet
Total SSD Storage: 10 TB
Total HDD Storage (raw): 52 TB
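
The core counts above are related by simple arithmetic. As a quick sanity check, here is a minimal Python sketch that reproduces the totals from the per-board figures (the variable names are illustrative, not part of any cluster tooling):

  # Reproduce the combined core counts from the per-board figures.
  NODES = 30                # total motherboards in the cluster
  CORES_PER_BOARD = 4       # physical cores per motherboard
  HYPERTHREAD_CORES = 24    # additional logical (Hyper-Threaded) cores

  physical_cores = NODES * CORES_PER_BOARD            # 30 x 4 = 120
  total_cores = physical_cores + HYPERTHREAD_CORES    # 120 + 24 = 144

  print("Physical cores:", physical_cores)
  print("Total cores:   ", total_cores)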

Hadoop Nodes:

Nodes limulus2-limulus6 are Basement Supercomputing Limulus™ Model 300 Personal Hadoop workstations. There are four Micro-ATX motherboards in each Limulus system: one motherboard is used for the workstation login and the other three are used as compute nodes. The user-facing login motherboard has a single Intel i7-6700 CPU @ 3.40GHz with 32 GBytes of memory, an SSD (500 GB), and two spinning HDDs (2x4 TB as RAID1). The worker nodes each have an Intel i5-6600 CPU @ 3.30GHz, 16 GBytes of memory, one SSD (500 GB), and a 60 GB M.2 drive for system software. The SSD drives on all four nodes are used for Hadoop HDFS storage. There are two Ethernet networks: a 1 GbE and a 10 GbE. CentOS v6 (a Red Hat rebuild) and Hortonworks HDP are installed on all nodes.
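
Since Hortonworks HDP is installed on all nodes, the HDFS capacity provided by the SSDs can be checked from a login node. The following is a minimal sketch, assuming the hdfs client command is on the PATH and Python 3.7+ is available; the helper name is illustrative:

  # Minimal sketch: summarize the HDFS report from a login node.
  # Assumes the HDP "hdfs" client command is on the PATH (Python 3.7+).
  import subprocess

  def hdfs_summary():
      """Run `hdfs dfsadmin -report` and return its output."""
      result = subprocess.run(
          ["hdfs", "dfsadmin", "-report"],
          capture_output=True, text=True, check=True,
      )
      return result.stdout

  if __name__ == "__main__":
      for line in hdfs_summary().splitlines():
          # Keep only the cluster-wide summary lines.
          if line.startswith(("Configured Capacity", "DFS Remaining",
                              "DFS Used", "Live datanodes")):
              print(line)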

The administrative node, now called dsl (previously called limulus), is identical to the other Hadoop nodes with the exception of two 6 TB HDDs in a RAID1 configuration.

TensorFlow Nodes

TBC

Cluster NAS (Not working)

The Cluster NAS is a 40 TByte storage device that can be accessed on every user node under

/mnt/bigdata
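
Because the NAS is currently down, it can help to verify the mount before using it. Here is a minimal sketch using only the Python 3 standard library (the /mnt/bigdata path is from above; everything else is illustrative):

  # Minimal sketch: verify the Cluster NAS mount on a user node.
  import os
  import shutil

  MOUNT_POINT = "/mnt/bigdata"

  if os.path.ismount(MOUNT_POINT):
      usage = shutil.disk_usage(MOUNT_POINT)
      print(f"{MOUNT_POINT}: {usage.free / 1e12:.1f} TB free "
            f"of {usage.total / 1e12:.1f} TB")
  else:
      print(f"{MOUNT_POINT} is not mounted (NAS may still be down)")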

Ethernet Switches

There are two Ethernet switches: a Dell X4012 used for the 10 GbE network and an HP 1420-16G used for the 1 GbE network.
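
On a Linux node, the negotiated speed of each interface can be read from sysfs, which makes it easy to confirm whether a node is attached to the 1 GbE or 10 GbE network. A minimal Python sketch (interface names vary per node):

  # Minimal sketch: report the link speed of each network interface
  # on a Linux node via sysfs (1000 = 1 GbE, 10000 = 10 GbE).
  from pathlib import Path

  for iface in sorted(Path("/sys/class/net").iterdir()):
      try:
          mbps = int((iface / "speed").read_text().strip())
          print(f"{iface.name}: {mbps} Mb/s")
      except (OSError, ValueError):
          # Loopback and down interfaces do not report a speed.
          print(f"{iface.name}: link speed unavailable")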

Cluster UPS

TBC
