Job Summary:
We are seeking an experienced Hadoop Operations Engineer to administer and scale our multi-petabyte Hadoop clusters and the services that run alongside them. The role focuses primarily on provisioning, ongoing capacity planning, monitoring, and management of the Hadoop platform and of the applications and middleware that run on it. This is an onsite role in Malmö.
Job Description:
- Maintain and scale production Hadoop, HBase, Kafka, and Spark clusters.
- Implement and administer the Hadoop infrastructure on an ongoing basis, including monitoring, tuning, and troubleshooting.
- Provide hardware architecture guidance, plan and estimate cluster capacity, and deploy Hadoop clusters.
- Improve scalability, service reliability, capacity, and performance.
- Triage production issues together with other operational teams as they occur.
- Conduct ongoing maintenance across our large-scale deployments.
- Write automation code for managing large Big Data clusters (a short illustrative sketch follows this list).
- Work with development and QA teams to design ingestion pipelines and integration APIs, and to provide Hadoop ecosystem services.
- Participate in the occasional on-call rotation supporting the infrastructure.
- Troubleshoot incidents hands-on: formulate theories, test hypotheses, and narrow down possibilities to find the root cause.
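To give a flavor of the automation work involved, here is a minimal sketch in Python. It is illustrative only, not part of the assignment: it shells out to the standard "hdfs dfsadmin -report" command and flags DataNodes whose disk usage crosses a threshold. The report-parsing patterns and the 90% threshold are assumptions made for this example.

    import re
    import subprocess
    import sys

    # Usage threshold above which a DataNode is flagged (an assumption for this sketch).
    USAGE_THRESHOLD_PCT = 90.0

    def flag_full_datanodes(threshold=USAGE_THRESHOLD_PCT):
        """Parse `hdfs dfsadmin -report` and return (node, used_pct) pairs
        for DataNodes whose DFS usage exceeds the threshold.

        Assumes the usual plain-text report layout, where each DataNode
        block contains "Name: <host:port>" and "DFS Used%: <n>%" lines.
        """
        report = subprocess.run(
            ["hdfs", "dfsadmin", "-report"],
            capture_output=True, text=True, check=True,
        ).stdout

        flagged = []
        node = None
        for line in report.splitlines():
            name_match = re.match(r"^Name:\s*(\S+)", line)
            if name_match:
                node = name_match.group(1)
                continue
            used_match = re.match(r"^DFS Used%:\s*([\d.]+)%", line)
            if used_match and node is not None:
                used_pct = float(used_match.group(1))
                if used_pct > threshold:
                    flagged.append((node, used_pct))
        return flagged

    if __name__ == "__main__":
        full_nodes = flag_full_datanodes()
        for node, pct in full_nodes:
            print(f"WARNING: {node} is at {pct:.1f}% DFS usage")
        # A non-zero exit code lets a cron job or monitoring hook alert on this.
        sys.exit(1 if full_nodes else 0)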
Competence demands:
- Hands-on experience managing production clusters (Hadoop, Kafka, Spark, and more).
- Strong development and automation skills; must be very comfortable reading and writing Python and Java code.
- 10+ years of overall experience, including at least 5 years of production Hadoop experience on medium to large clusters.
- Experience with configuration management and automation.
- Organized, focused on building, improving, resolving, and delivering.
- Good communicator within and across teams, able to take the lead.
- Tools-first mindset: you build tools for yourself and others to increase efficiency and to make hard or repetitive tasks quick and easy (a short sketch follows this list).
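In the spirit of that tools-first mindset, here is a second minimal, hypothetical sketch: a tiny wrapper that turns a repetitive manual check (hunting for under-replicated Kafka partitions) into a single command. It wraps the stock kafka-topics.sh CLI; the broker address is a placeholder.

    import subprocess
    import sys

    # Placeholder bootstrap address; replace with a real broker for your cluster.
    BOOTSTRAP_SERVER = "kafka-broker-1:9092"

    def under_replicated_partitions(bootstrap=BOOTSTRAP_SERVER):
        """Return the report lines for under-replicated partitions.

        kafka-topics.sh prints one line per under-replicated partition
        when given this flag combination, and nothing when healthy.
        """
        result = subprocess.run(
            [
                "kafka-topics.sh",
                "--bootstrap-server", bootstrap,
                "--describe",
                "--under-replicated-partitions",
            ],
            capture_output=True, text=True, check=True,
        )
        return [line for line in result.stdout.splitlines() if line.strip()]

    if __name__ == "__main__":
        lines = under_replicated_partitions()
        if lines:
            print(f"{len(lines)} under-replicated partition(s) found:")
            print("\n".join(lines))
            sys.exit(1)  # signal a problem to cron or a monitoring hook
        print("All partitions fully replicated.")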
Education:
Bachelor's or Master's degree in Computer Science or a similar technical field.
Start: as soon as a suitable candidate is found
Duration: long-term assignment
Work location: Malmö area, Sweden
Requirements: Minimum 5 years of professional IT experience.
Job type: Freelance