===== System Description =====
The following diagram shows how the cluster is configured in the classroom.
  
{{ wiki:cluster-diagram-stp.png?500 |The Data Science Lab Cluster}}
  
==== Combined Cluster Specification ====
  
Total Number of Nodes (Motherboards): 30\\
  
  
==== Hadoop Nodes ====
  
Nodes limulus2-limulus6 are Basement Supercomputing Limulus™ Model 300 Personal Hadoop workstations. There are four Micro-ATX motherboards in each Limulus system. One motherboard is used for the workstation login and the other three are used as compute nodes. The user-facing login motherboard has a single Intel i7-6700 CPU @ 3.40GHz with 32 GBytes of memory. It also has an SSD (500 GB) drive and two spinning HDDs (2x4 TB as RAID1). The worker nodes have an i5-6600 CPU @ 3.30GHz, 16 GBytes of memory, one SSD (500 GB), and a 60 GB M.2 drive for system software. The SSD drives on all four nodes are used for Hadoop HDFS storage. There are two Ethernet networks: a 1 GbE and a 10 GbE. CentOS v6 (a Red Hat rebuild) and Hortonworks HDP are installed on all nodes.
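The per-node details above can be spot-checked from a shell on any node. This is only a sketch using standard Linux utilities (lscpu, free, lsblk), not part of the cluster software itself; the expected values in the comments come from the description above.

<code bash>
# CPU model (i7-6700 on the login node, i5-6600 on the worker nodes)
lscpu | grep 'Model name'

# Installed memory in GBytes (32 on the login node, 16 on workers)
free -g | head -n 2

# Block devices: the 500 GB SSD, the HDDs, and the 60 GB M.2 system drive
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
</code>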
The administrative node is now called dsl (it was previously called limulus). It is identical to the other Hadoop nodes with the exception of two 6 TB HDD disks in a RAID1 configuration.
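The state of the RAID1 pair on the administrative node can be checked with the kernel's software-RAID status file. This is a sketch that assumes Linux md RAID; the array name md0 is a placeholder, not taken from this page.

<code bash>
# Summary of all software RAID arrays and their sync state
cat /proc/mdstat

# Detailed status for one array (md0 is an assumed device name)
mdadm --detail /dev/md0
</code>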
  
==== TensorFlow Nodes ====
  
TBC
  
==== Cluster NAS (Not working) ====
  
The Cluster NAS is a 40 TByte storage device that can be accessed on every user node under
<code>
</code>
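Once the NAS is back in service, a quick way to confirm it is mounted and to see its roughly 40 TByte capacity is df. This is a sketch; the actual mount path is not shown in this revision of the page.

<code bash>
# Show all mounted filesystems with type and size;
# the NAS should appear as a network filesystem (e.g. nfs) of about 40T
df -hT
</code>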
  
==== Ethernet Switches ====
  
There are two Ethernet switches: a Dell X4012 used for the 10 GbE network and an HP 1420-16G used for the 1 GbE network.
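Which of the two networks a given interface is attached to can be read from its negotiated link speed in sysfs. This is a sketch using standard Linux paths; interface names vary from node to node.

<code bash>
# Print the negotiated speed in Mb/s for every interface
# (1000 = the 1 GbE HP switch, 10000 = the 10 GbE Dell switch)
for dev in /sys/class/net/*; do
  speed=$(cat "$dev/speed" 2>/dev/null || echo "n/a")
  printf '%s: %s Mb/s\n' "$(basename "$dev")" "$speed"
done
</code>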
  
==== Cluster UPS ====
  
TBC
system_description.1541121688.txt.gz · Last modified: 2018/11/02 01:21 by deadline
