- Career growth opportunities
- Work-from-home opportunities (based in Japan)
About Our Client
Our client is an established e-commerce giant. As part of its expansion plans, the company is seeking a system administrator with a big data background to join its growing team. The candidate will be responsible for the data pipeline used to analyze all of the company's applications and websites. If you are a system administrator capable of building and maintaining several large clusters of 1,000+ nodes running Hadoop applications, then you are a match!
Job Description
- Build and maintain large-scale batch and real-time data pipelines using data processing frameworks and technologies such as Spark, Flink, Kafka, and Druid.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
- Improve data quality through testing, tooling, and continuous performance evaluation.
- Work in cross-functional agile teams to continuously experiment, iterate and deliver on new product objectives.
- Work with data and analytics experts to strive for greater functionality in our data systems.
The Successful Applicant
- Minimum 1 year of experience in system administration or data warehousing, and at least 2 years of DevOps experience with Hadoop distributions.
- Understanding of the basic concepts of Apache Spark, Flink, and Druid.
- Comfortable setting up and maintaining CentOS systems on premises or in the cloud.
- Willing to work end-to-end, from sourcing, processing, and publishing data for reporting to building the necessary platform to support it.
- Some knowledge of Bash/Python.
What's on Offer
- Flexible work environment with work-from-home benefits
- Opportunity to continue growing your portfolio in the leading e-commerce industry
- 6M-8M per annum, based on skill set
- Freedom to try other roles within the team
- Fast hiring process