#10439 | 2019-09-30 Malmö area, Sweden

Hadoop Operations Engineer (Hadoop, Spark) (M/W)

Job Summary:
We are seeking a solid Hadoop Operations Engineer to administer and scale our multi-petabyte Hadoop clusters and the related services that run alongside them. The role focuses primarily on provisioning, ongoing capacity planning, monitoring, and management of the Hadoop platform and the applications/middleware that run on it. This is an onsite role in Malmö.

Job Description:
  • Hands-on experience managing production clusters (Hadoop, Kafka, Spark, and more).
  • Strong development/automation skills; must be very comfortable reading and writing Python and Java code.
  • 10+ years of overall experience, including at least 5 years running Hadoop in production on medium to large clusters.
  • A tools-first mindset: you build tools for yourself and others to increase efficiency and to make hard or repetitive tasks quick and easy.
  • Experience with configuration management and automation.
  • Organized, and focused on building, improving, resolving, and delivering.
  • A good communicator within and across teams, able to take the lead.
Education:
Bachelor's or Master's degree in Computer Science or a similar technical field.

  • Maintain and scale production Hadoop, HBase, Kafka, and Spark clusters.
  • Implement and administer the Hadoop infrastructure on an ongoing basis, including monitoring, tuning, and troubleshooting.
  • Provide hardware architecture guidance, plan and estimate cluster capacity, and deploy Hadoop clusters.
  • Improve scalability, service reliability, capacity, and performance.
  • Triage production issues together with other operational teams as they occur.
  • Conduct ongoing maintenance across our large-scale deployments.
  • Write automation code for managing large Big Data clusters.
  • Work with development and QA teams to design ingestion pipelines and integration APIs, and to provide Hadoop ecosystem services.
  • Participate in the occasional on-call rotation supporting the infrastructure.
  • Troubleshoot incidents hands-on: formulate theories, test hypotheses, and narrow down possibilities to find the root cause.
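To illustrate the automation duties above, here is a minimal, hypothetical sketch of the kind of small tool this role involves: a capacity check over per-DataNode usage figures. All names and thresholds here are assumptions for illustration, not details from the posting.

```python
# Hypothetical capacity-alert helper: flag DataNodes whose disk usage
# exceeds a threshold. Usage data would in practice come from a source
# such as the output of `hdfs dfsadmin -report` or a metrics system.

def capacity_alerts(datanodes, used_threshold=0.85):
    """Given {hostname: (used_bytes, capacity_bytes)}, return the sorted
    list of hosts whose used/capacity ratio exceeds the threshold."""
    return sorted(
        host
        for host, (used, capacity) in datanodes.items()
        if capacity and used / capacity > used_threshold
    )

nodes = {
    "dn1.example.com": (900, 1000),  # 90% used: over threshold
    "dn2.example.com": (400, 1000),  # 40% used: fine
}
print(capacity_alerts(nodes))  # ['dn1.example.com']
```

A real version would pull live metrics and feed an alerting pipeline; the point is the tools-first habit of turning a repetitive manual check into a script.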



Start: as soon as a suitable candidate is found
Duration: long term assignment
Work location: Malmö area, Sweden
Requirements: Min. 5 years of professional IT experience.
Job type: Freelance

This project is no longer open.

We regret to inform you that the search for suitable IT consultants for this project has already been completed.

