Welcome to Visualising Real Big Data at Nikhef!

Nikhef, the Dutch national institute for sub-atomic physics, and the Dutch National e-Infrastructure form part of a large, glob...
This data is used not only here, but by over 20,000 researchers worldwide, who transfer it back and forth between the data curation centres, the more than 300 compute centres, and thousands of small analysis workstations and desktops. Data flows globally: even within Nikhef alone, the compute and disk clusters are interconnected by 240 gigabit-per-second links, and international connectivity exceeds 100 gigabit per second.
= More questions than answers? =
The Phase-II visualisation challenge leaves you with plenty of things to try out. Use your creativity to visualise, explain and analyse the data: big data lives by propaganda (and agitation)!
* How can global data flows be presented?
* Can one conceive visualisations for the general public, for users, or for both?
* Can troublesome inter-peer links (and local disk servers) be identified via analytics techniques?
* Which systems (or groups of systems) use the most bandwidth (inbound and outbound separately)?
* Which systems (or groups of systems) generate the most connections?
* What does the time distribution of transfers look like (both bandwidth and number of transfers)?
* What "funny behavior" is there (machine-learning anomaly detection)?
But there is surely more to do with the data you have!
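As a starting point, the "which systems use the most bandwidth?" question could be answered with an ElasticSearch terms aggregation. This is only a sketch: the index pattern ("netflow-*") and the field names ("src_host", "bytes", "@timestamp") are assumptions, so check the course documentation pack for the actual mapping used on the cluster.

```python
import json

def top_bandwidth_query(n=10, hours=24):
    """Build a query that sums bytes per source host over a recent window.

    Field and index names are placeholders -- adapt them to the real mapping.
    """
    return {
        "size": 0,  # we only want the aggregation buckets, not raw hits
        "query": {
            "range": {"@timestamp": {"gte": "now-%dh" % hours}}
        },
        "aggs": {
            "per_host": {
                # top-N hosts by document count; each bucket also sums bytes
                "terms": {"field": "src_host", "size": n},
                "aggs": {
                    "total_bytes": {"sum": {"field": "bytes"}}
                }
            }
        }
    }

if __name__ == "__main__":
    # POST this body to the cluster's _search endpoint, e.g. with curl.
    print(json.dumps(top_bandwidth_query(), indent=2))
```

Sorting the returned buckets by their "total_bytes" value (rather than document count) then gives the heaviest data movers; the "in and out separately" variant would use a destination-host field instead.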
  
 
= About the data analytics cluster =

[[File:RTM-2D-busy-2007.jpg|200px|thumb|right|Jobs distribution across EGEE and WLCG]]
The TI student team in 2015 set up the ELK cluster and keeps a continuous stream of fresh data flowing into the database. At any time there should be about one month's worth of detailed historical data in the database, updated in near-real-time.
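Because roughly a month of near-real-time history is available, a date_histogram aggregation is a natural way to look at the time distribution of transfers. Again only a sketch: the index pattern ("logstash-*") and field names ("@timestamp", "bytes") are assumptions, not the documented mapping.

```python
import json

def transfers_per_hour(days=30):
    """Bucket documents into hourly bins over the last `days` days,
    summing the transferred bytes in each bin."""
    return {
        "size": 0,  # aggregation only, no raw hits
        "query": {"range": {"@timestamp": {"gte": "now-%dd" % days}}},
        "aggs": {
            "per_hour": {
                "date_histogram": {"field": "@timestamp", "interval": "1h"},
                "aggs": {"bytes": {"sum": {"field": "bytes"}}},
            }
        },
    }

if __name__ == "__main__":
    # POST this body to the cluster's _search endpoint.
    print(json.dumps(transfers_per_hour(), indent=2))
```

The per-bucket document counts give the "number of transfers" view, while the summed bytes give the bandwidth view of the same timeline.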
  
An ElasticSearch (ES) interface and API is accessible (through an application proxy) on a dedicated system: "vm6.stud1.ipmi.nikhef.nl" (172.23.1.16) on port 9200/tcp.
 
Details of the API and how to use the interface (alongside some example queries) are available in your course documentation pack, kindly prepared by Jouke, Olivier, and Rens. We expect clients to talk to the ElasticSearch API over the network: the client should run on your local system (own laptop, desktop, HvA VDI system, &c.) and connect to the system via the public API only.
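A minimal client along these lines can be written with the Python standard library alone. The host and port come from the section above; the index pattern "logstash-*" is an assumption, so substitute whatever the documentation pack lists.

```python
import json
from urllib import request

ES_HOST = "http://vm6.stud1.ipmi.nikhef.nl:9200"

def search_url(index, host=ES_HOST):
    """Compose the _search endpoint URL for an index pattern."""
    return "%s/%s/_search" % (host, index)

def es_search(index, body, host=ES_HOST):
    """POST a search body to ES and return the decoded JSON response."""
    req = request.Request(
        search_url(index, host),
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

# Example (requires network access to the cluster):
#   result = es_search("logstash-*", {"size": 3})
#   for hit in result["hits"]["hits"]:
#       print(hit["_index"], hit["_id"])
```

Fetching a handful of documents this way is a quick method to inspect the field layout before designing queries or visualisations.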
= Responsibilities and Acceptable Use =
During this course you will work with logging and auditing data collected by Nikhef in the course of its normal operations. Although the data does not contain sensitive personal information, it is still private and must not be reproduced, copied, or redistributed by any means beyond the visualisation of results on your own personal or HvA systems. Please keep the private data confidential, and do not make bulk downloads of search results onto your laptop (it is too large anyway ;-).

Visualisations or representations that do not reveal the individuals involved in a data flow are, of course, fine to show widely!
While using Nikhef systems, network, APIs and services, we expect you to abide by the [https://www.nikhef.nl/aup/ Nikhef Acceptable Use Policy]. If you find flaws in the system, send a mail to [mailto:security@nikhef.nl security@nikhef.nl] so we can follow up. If you just need help, send a mail to [mailto:pdp-proj-hva-bigdata@nikhef.nl pdp-proj-hva-bigdata@nikhef.nl].
  
 
= OpenVPN client config example =
 
  comp-lzo
  verb 3
  cipher aes-256-cbc
  #auth-user-pass ../keys/secret-schaapscheerder.conf
  -----END CERTIFICATE-----
  
and if you want auto-login, *only on your own laptop*, create a secure, protected file "../keys/secret-schaapscheerder.conf" and uncomment the auth-user-pass line above. The file "../keys/secret-schaapscheerder.conf" should contain something like
  
 
  nvahva16x2342
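For reference: OpenVPN reads an auth-user-pass file as exactly two lines, the username on the first line and the password on the second. With purely hypothetical placeholder values (not real credentials), such a file looks like

  example-username
  example-password

Keep the file readable only by yourself (e.g. mode 0600), since it stores a cleartext password.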

Latest revision as of 14:28, 7 September 2016