HW Solutions for the Tier 1 at CNAF
Luca dell’Agnello Stefano Zani
(INFN – CNAF, Italy)
III CCR Workshop May 24-27 2004
Tier1
INFN computing facility for the HEP community
o Ended its prototype phase last year; now fully operational
Location: INFN-CNAF, Bologna (Italy)
o One of the main nodes on the GARR network
Personnel: ~10 FTEs
o ~3 FTEs dedicated to the experiments
Multi-experiment
o LHC experiments (ALICE, ATLAS, CMS, LHCb), Virgo, CDF, BaBar, AMS, MAGIC, ...
o Resources dynamically assigned to experiments according to their needs
50% of the Italian resources for LCG
o Participation in the experiments' data challenges
Integrated with the Italian Grid
o Resources also accessible in the traditional (non-grid) way
Logistics
Recently moved to a new location (last January)
Hall in the basement (floor -2), ~1000 m2 of total space, hosting:
o Computing nodes
o Storage devices
o Electric power system (UPS)
o Cooling and air conditioning system
o GARR GigaPoP
Easily accessible by lorries from the road
Not suitable for office use (remote control needed)
Electric Power
Electric power generator: 1250 kVA (~1000 kW), enough for up to 160 racks
Uninterruptible Power Supply (UPS): 800 kVA (~640 kW)
o Located in a separate, conditioned and ventilated room
380 V three-phase power distributed to all racks (busbar trunking)
Rack power controls output 3 independent 220 V lines for the computers
o Power controls sustain loads of up to 16 A or 32 A
o 32 A power controls are needed for racks of 36 dual-processor Xeon nodes (see the check below)
3 APC power distribution modules per rack (24 outlets each)
o Fully programmable (allows switching servers on gradually)
o Remotely manageable via web
380 V three-phase also feeds the other devices (tape libraries, air conditioning, etc.)
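A quick sanity check of the 16 A vs. 32 A choice, as a minimal Python sketch; the ~300 W draw per dual-Xeon node is an assumed figure, not from the slides:

    nodes_per_rack = 36
    watts_per_node = 300       # assumption: typical load of a 1U dual-Xeon server
    line_voltage = 220         # V, one of the 3 independent lines per rack
    lines_per_rack = 3

    total_w = nodes_per_rack * watts_per_node              # 10800 W per rack
    amps_per_line = total_w / lines_per_rack / line_voltage
    print(f"{total_w / 1000:.1f} kW per rack, {amps_per_line:.1f} A per line")
    # ~16.4 A per line: above a 16 A control's rating, hence the 32 A units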
Cooling & Air Conditioning
RLS chillers (Airwell) on the roof: ~700 kW
o Water cooling; a "booster pump" is needed (~20 m between the Tier1 hall and the roof)
o Noise insulation
1 air conditioning unit (uses 20% of the RLS cooling power and controls humidity)
12 local cooling systems (Hiross) in the computing room
Typical WN Rack Composition
• Power controls (3U)
• 1 network switch (1-2U)
  – 48 FE copper interfaces
  – 2 GE fiber uplinks
• 34-36 1U WNs
  – Connected to the network switch via FE
  – Connected to the KVM system
Remote console control
Paragon UTM8 (Raritan)
o 8 analog (UTP/fiber) output connections
o Supports up to 32 daisy chains of 40 nodes (UKVMSPD modules needed)
o Costs: 6 kEuro + 125 Euro/server for the UKVMSPD module (estimate below)
o IP-Reach (expansion to support IP transport) evaluated but not used
Autoview 2000R (Avocent)
o 1 analog + 2 digital (IP transport) output connections
o Supports connections to up to 16 nodes
o Optional expansion to 16x8 nodes
o Compatible with Paragon ("gateway" to IP)
Evaluating Cyclades AlterPath: KVM via serial line (cheaper)
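A rough cost estimate for the Paragon option, as a sketch; attaching all ~400 servers (the count given in the Computing units slide) is an assumption made only for the example:

    base_cost = 6000      # Euro, Paragon UTM8 unit
    per_server = 125      # Euro, one UKVMSPD module per server
    servers = 400         # approximate Tier1 node count (assumption)
    print(f"total: ~{base_cost + per_server * servers} Euro")   # ~56000 Euro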
Networking (1)
Main network infrastructure based on optical fibre (~20 km)
o Eases the adoption of new high-performance transmission technologies
o Ensures better electrical insulation over long distances
Local (rack-wide) links use UTP (copper) cables
LAN has a "classical" star topology
GE core switch (Enterasys ER16); a new core switch is due to ship next July
o 120 Gigabit fiber ports (scales up to 480 ports)
o 12 10-Gigabit Ethernet ports (scales up to a maximum of 48 ports)
Farm uplinks via GE trunks (channels) to the core switch
Disk servers directly connected to the GE core switch (mainly fibre)
Networking (2)
WNs connected via FE to the rack switch (1 switch per rack)
Not a single brand of switch (as for the WNs):
o 3 Extreme Summit: 48 FE + 2 GE ports
o 3 Cisco 3550: 48 FE + 2 GE ports
o 8 Enterasys: 48 FE + 2 GE ports
o 7 Summit 400: 48 GE copper + 2 GE ports (2x10Gb ready)
Homogeneous characteristics:
o 48 copper Ethernet ports
o Support for the main standards (e.g. 802.1q)
o 2 Gigabit uplinks (optical fibre) to the core switch (oversubscription checked below)
CNAF interconnected to the GARR-G backbone at 1 Gbps (Giga-PoP co-located)
2 x 1 Gbps test links to CERN and Karlsruhe
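The rack uplink sizing can be checked with the figures above; a minimal sketch of the worst-case oversubscription of a WN rack switch:

    fe_ports, fe_gbps = 48, 0.1   # 48 Fast Ethernet access ports
    uplinks, ge_gbps = 2, 1.0     # 2 Gigabit Ethernet fiber uplinks
    ratio = (fe_ports * fe_gbps) / (uplinks * ge_gbps)
    print(f"worst-case oversubscription: {ratio:.1f}:1")   # 2.4:1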
Network Configuration
[Diagram: Tier1 network layout. GARR "Bo 12KGP" GigaPoP router linked to the SSR8600 core switch; rack farm switches (FarmSW1-FarmSW12, FarmSWG1/G2, LHCBSW1, Babar SW, including IBM, Dell, 3Com and Catalyst 3550 units); NAS1-NAS4; disk servers with Fibre Channel links to the Dell, Axus, STK and Infortrend SAN devices; internal services and 1st-floor switches]
L2 Configuration
Each experiment has its own VLAN
o Solution adopted for complete granularity
Port-based VLANs; VLAN identifiers propagated across switches (802.1q)
o Avoids recabling (or physically moving) machines to change the farm topology
o Level 2 isolation of the farms
o Possibility to define multi-tag (trunk) ports (for servers)
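To make the port-based VLAN idea concrete, a minimal sketch; the VLAN IDs, port counts and trunk positions are invented for illustration:

    VLANS = {"cms": 101, "atlas": 102, "alice": 103, "lhcb": 104}  # hypothetical tags

    def rack_vlan_config(experiment, access_ports=48, trunks=(49, 50)):
        """Per-port plan for one rack switch: WN ports untagged, 802.1q trunks tagged."""
        vid = VLANS[experiment]
        plan = [(port, vid, False) for port in range(1, access_ports + 1)]
        plan += [(port, "all", True) for port in trunks]   # uplinks carry every VLAN
        return plan

Reassigning a rack to another experiment then means changing one VLAN tag in the plan, with no recabling, exactly as the slide describes.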
Power Switches
• 2 models used at Tier1:
• "Old" APC MasterSwitch
  – Control unit AP9224 controlling 3x8 outlets (AP9222 PDUs) from 1 Ethernet port
• "New" APC PDU
  – Control unit AP7951 controlling 24 outlets from 1 Ethernet port
  – "Zero" rack units (vertical mount)
• Access to the configuration/control menu via serial/telnet/web/SNMP (example below)
• 1 dedicated machine running the APC InfraStruXure Manager software (in progress)
See also: http://www.cnaf.infn.it/cnafdoc/CD0044.doc
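Since the PDUs are reachable via SNMP, outlets can also be driven from scripts; a sketch using the net-snmp snmpset tool. The OID is the PowerNet-MIB sPDUOutletCtl entry commonly documented for MasterSwitch units, but it should be verified against the actual firmware; the hostname and community string are placeholders:

    import subprocess

    def set_outlet(host, outlet, state, community="private"):
        """state: 1 = on, 2 = off, 3 = reboot (per PowerNet-MIB)."""
        oid = f".1.3.6.1.4.1.318.1.1.4.4.2.1.3.{outlet}"
        subprocess.run(["snmpset", "-v1", "-c", community, host, oid, "i", str(state)],
                       check=True)

    # set_outlet("pdu-rack01.cr.cnaf.infn.it", 5, 3)  # hypothetical host: reboot outlet 5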
Remote Power Distribution Unit
Screenshot of the APC InfraStruXure Manager software showing the status of all the Tier1 PDUs
Computing units
~400 1U rack-mountable Intel dual-processor servers (800 MHz - 2.4 GHz)
o ~240 WNs (~480 CPUs) available for LCG
To be shipped June 2004:
o 32 1U dual-processor Pentium 2.4 GHz servers
o 350 1U dual-processor Pentium IV 3.06 GHz servers (2 x 120 GB HDs, 4 GB RAM, 2159 Euro each)
Tendering: HPC farm with MPI
o Servers interconnected via InfiniBand
Opteron farm (near future)
o To let the experiments test their software on the AMD architecture
Storage Resources
~50 TB of raw disk space online (sum checked below)
NAS
o NAS1+NAS4 (3ware, low cost): 4.2 TB total
o NAS2+NAS3 (Procom): 13.2 TB total
SAN
o Dell PowerVault 660F: 7 TB total
o Axus (Brownie): 2 TB total
o STK BladeStore: 9 TB total
o Infortrend ES A16F-R: 12 TB total
o IBM FAStT 900 (in a few weeks): 150 TB total
See also: http://www.lnf.infn.it/sis/preprint/pdf/INFN-TC-03-19.pdf
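The ~50 TB figure can be cross-checked against the listed volumes, as a quick sketch:

    nas = 4.2 + 13.2                      # 3ware + Procom NAS volumes (TB)
    san = 7 + 2 + 9 + 12                  # PowerVault + Axus + BladeStore + Infortrend (TB)
    print(f"{nas + san:.1f} TB online")   # 47.4 TB, i.e. the quoted ~50 TB
    # the IBM FAStT 900 (150 TB) is still to be delivered and is not counted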
STORAGE resource
[Diagram: clients on the WAN or Tier1 LAN access the file servers and NAS heads; the SAN devices sit behind them on Fibre Channel]
o Procom NAS2 (nas2.cnaf.infn.it): 8100 GByte, used by VIRGO and ATLAS
o Procom NAS3 (nas3.cnaf.infn.it): 4700 GByte, used by ALICE and ATLAS
o IDE NAS1, NAS4 (nas4.cnaf.infn.it): 1800+2000 GByte, used by CDF and LHCb
o CMS file server: diskserv-cms-1
o File server Fcds2, aliases diskserv-ams-1 and diskserv-atlas-1
o Axus Browie: ~2200 GByte, 2 FC interfaces
o Dell PowerVault: 7100 GByte, 2 FC interfaces, fail-over support
o Gadzoox Slingshot FC switch, 18 ports
o Raidtec: 1800 GByte, 2 SCSI interfaces
o STK BladeStore: ~10000 GByte, 4 FC interfaces
o Infortrend ES A16F-R: 12 TB
o CASTOR server + staging area
o STK180 library with 100 LTO tapes (10 TByte native)
o STK L5500 robot (max 5000 cartridges), 6 LTO-2 drives
Storage management and access (1)
Tier1 storage resources accessible as classical storage or via grid
Non-grid disk storage accessible via NFS
o Generic WNs also have an AFS client
o NFS-mounted volumes configured via autofs and LDAP (sketch after this slide)
  - A unique configuration repository eases maintenance
  - In progress: integration of the LDAP configuration with the Tier1 db data
Scalability issues with NFS
o Experienced stalled mount points
o Recent NFS versions export synchronously by default: we needed to revert to async exports and use reduced rsize and wsize to avoid a huge amount of retransmissions
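A minimal sketch of the autofs + LDAP scheme described above; the LDAP schema (attribute names, base DN, object class) and hostname are invented for illustration, and the reduced rsize/wsize values are only an example:

    import ldap  # python-ldap

    conn = ldap.initialize("ldap://ldap.cr.cnaf.infn.it")   # hypothetical host
    conn.simple_bind_s()                                    # anonymous bind
    for dn, attrs in conn.search_s("ou=volumes,dc=cnaf,dc=infn,dc=it",
                                   ldap.SCOPE_SUBTREE, "(objectClass=nfsVolume)",
                                   ["cn", "nfsServer", "nfsPath"]):
        key = attrs["cn"][0].decode()
        server = attrs["nfsServer"][0].decode()
        path = attrs["nfsPath"][0].decode()
        # reduced rsize/wsize to limit retransmissions, as noted in the slide
        print(f"{key} -rw,rsize=8192,wsize=8192 {server}:{path}")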
Storage management and access (2)
Part of the disk storage is used as a front end to CASTOR
o Balance between plain disk and CASTOR set according to the experiments' needs
1 stager for each experiment (installation in progress; see the sketch below)
CASTOR accessible both directly and via grid
o CASTOR SE available
ALICE Data Challenge used the CASTOR architecture
o Feedback given to the CASTOR team
o Optimization needed for file restaging
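With one stager per experiment, clients select theirs through the STAGE_HOST environment variable used by the CASTOR tools; a sketch, with a hypothetical stager hostname and file path:

    import os, subprocess

    env = dict(os.environ, STAGE_HOST="stager-cms.cr.cnaf.infn.it")  # hypothetical host
    subprocess.run(["rfcp", "/castor/cnaf.infn.it/cms/run1234.dat",  # hypothetical path
                    "/tmp/run1234.dat"], env=env, check=True)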
Tier1 Database
Resource database and management interface
o Postgres database as back end
o Web interface (Apache + mod_ssl + PHP)
o Stores the servers' hw characteristics, sw configuration and allocation
Possible direct access to the db for some applications (sketch below)
o e.g. the monitoring system (Nagios)
Interface to configure the switches and interoperate with the installation system: VLAN tags, DNS, DHCP
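A sketch of what direct db access could look like; the table and column names are invented, since the actual schema is not given in the slides:

    import psycopg2

    conn = psycopg2.connect(host="tier1db.cr.cnaf.infn.it",   # hypothetical host
                            dbname="tier1", user="reader")
    with conn.cursor() as cur:
        cur.execute("SELECT hostname, rack, vlan FROM servers WHERE experiment = %s",
                    ("cms",))
        for hostname, rack, vlan in cur.fetchall():
            print(hostname, rack, vlan)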
Installation issues
Centralized installation system: LCFG (EDG WP4)
o Integration with a central Tier1 db
o Moving a node from one farm to another implies only a change of IP address, not of name (sketch below)
Unique dhcp server for all the VLANs
o Support for DDNS (cr.cnaf.infn.it)
Investigating Quattor for future needs
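A sketch of the single-dhcp-server idea: host entries generated from the Tier1 db pin each node's name to its MAC address, so a move to another VLAN only changes the fixed address. The records below are invented:

    NODES = [("wn001", "00:0e:0c:11:22:33", "131.154.99.10"),
             ("wn002", "00:0e:0c:44:55:66", "131.154.99.11")]   # hypothetical entries

    for name, mac, ip in NODES:
        print(f"host {name}.cr.cnaf.infn.it {{\n"
              f"    hardware ethernet {mac};\n"
              f"    fixed-address {ip};\n"
              f"}}")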
Our Desired Solution for Resource Access
SHARED RESOURCES among all experiments
o Priorities and reservations managed by the scheduler (toy illustration below)
Most of the Tier1 computing machines installed as LCG Worker Nodes, with light modifications to support more VOs
Application software not installed directly on the WNs but accessed from outside (NFS, AFS, ...)
One or more resource managers handling all the WNs in a centralized way
A standard way to access storage for each application
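A toy illustration of scheduler-managed shares (not any specific batch system; the shares and usage figures are invented): the next job is taken from the experiment whose share is most under-used:

    SHARES = {"cms": 0.3, "atlas": 0.3, "alice": 0.2, "lhcb": 0.2}      # entitled fractions
    usage = {"cms": 120.0, "atlas": 40.0, "alice": 50.0, "lhcb": 10.0}  # CPU-hours consumed

    def next_experiment():
        total = sum(usage.values())
        # deficit = entitled fraction minus consumed fraction
        return max(SHARES, key=lambda e: SHARES[e] - usage[e] / total)

    print(next_experiment())   # "lhcb" in this example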