Job Description

  • Design, implement, and maintain scalable data pipelines handling billions of records.
  • Operate big data systems and ensure their reliability and availability.
  • Propose data architectures for new requirements and fine-tune existing ones.
  • Collaborate with the business intelligence, ventures, and data science teams, among others, to meet their data requirements.
  • Monitor data services and resolve incidents as they arise.

Requirements

  • Ability to write highly efficient data pipelines and troubleshoot them.
  • Experience working with Airflow.
  • Hands-on experience with Git and GitLab.
  • Specialized in the Hadoop ecosystem (HDFS, Hive, Spark, and Delta Lake).
  • At least 2 years of programming experience in Python.
  • Experience working with relational databases (Oracle and MySQL).
  • Solid knowledge of SQL.
  • Good communication and teamwork skills.
  • Hands-on experience with Linux.