===== System Description =====

The following diagram shows how the cluster is configured in the classroom.

{{ wiki:cluster-diagram-stp.png?500 |The Data Science Lab Cluster}}

==== Combined Cluster Specification ====

Total Number of Nodes (Motherboards): 30\\
Total Number of Cores: 144\\
Physical Cores Per Motherboard: 4\\
Number of Physical Cores: 120\\
Number of Hyperthreaded Cores: 24\\
Total Memory: 655.76 GB\\
Network 1: 1 Gigabit Ethernet\\
Network 2: 10 Gigabit Ethernet\\
Total SSD Storage: 10 TB\\
Total HDD Storage (raw): 52 TB
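
Per-node core and memory counts can be spot-checked from the administrative node. The sketch below assumes pdsh is installed and that the login nodes are reachable as limulus2 through limulus6; adjust the host list to the actual node names.

<code>
# Report logical core count and memory on each login node
# (pdsh and the limulus[2-6] host names are assumptions)
pdsh -w limulus[2-6] 'nproc; free -h | grep Mem'
</code>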

==== Hadoop Nodes ====

Nodes limulus2 through limulus6 are Basement Supercomputing Model 300 Limulus™ Personal Hadoop workstations. Each Limulus system contains four Micro-ATX motherboards: one is used for the workstation login and the other three are used as compute nodes. The user-facing login motherboard has a single Intel i7-6700 CPU @ 3.40 GHz with 32 GBytes of memory, a 500 GB SSD, and two spinning 4 TB HDDs in a RAID1 configuration. Each worker node has an Intel i5-6600 CPU @ 3.30 GHz, 16 GBytes of memory, one 500 GB SSD, and a 60 GB M.2 drive for system software. The SSDs on all four nodes are used for Hadoop HDFS storage. There are two Ethernet networks: a 1 GbE network and a 10 GbE network. CentOS 6 (a Red Hat rebuild) and Hortonworks HDP are installed on all nodes.
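
Because the SSDs back HDFS, the aggregate HDFS capacity and per-DataNode usage can be checked from any cluster node with the standard Hadoop admin report (run as a user with HDFS admin rights):

<code>
# Summarize configured capacity, remaining space, and live DataNodes
hdfs dfsadmin -report
</code>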

The administrative node, now called dsl (previously called limulus), is identical to the other Hadoop nodes with the exception of two 6 TB HDDs in a RAID1 configuration.
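
The health of that RAID1 pair can be checked from the command line. The sketch below assumes Linux software RAID (md); if a hardware controller is used instead, the vendor's tool applies.

<code>
# Show the state of any Linux software RAID (md) arrays on dsl
cat /proc/mdstat
</code>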

==== TensorFlow Nodes ====

TBC

==== Cluster NAS (Not working) ====

The Cluster NAS is a 40 TByte storage device that is mounted on every user node at
 +<code> 
 +/mnt/bigdata 
 +</code> 
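
When the NAS is back in service, a quick way to confirm that the share is mounted and reporting its expected capacity:

<code>
# Verify the NAS mount point and show its size and usage
df -h /mnt/bigdata
</code>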

==== Ethernet Switches ====

There are two Ethernet switches: a Dell X4012 for the 10 GbE network and an HP 1420-16G for the 1 GbE network.
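
To confirm which switch a given node interface is attached to, the negotiated link speed can be read with ethtool. The interface names below are assumptions; list the actual names first with "ip link".

<code>
# Print the negotiated speed of each interface (run as root;
# eth0/eth1 are assumed names and may differ on the nodes)
ethtool eth0 | grep Speed
ethtool eth1 | grep Speed
</code>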

==== Cluster UPS ====

TBC