Data Engineer

This is an exciting opportunity to join a growing, VC-backed startup in the Big Data space. We are looking for a senior Data Engineer to join our team! You will work with everything connected to data: expanding and optimizing our data pipeline architecture and data flows, and building and improving our data systems.

What's the Job?

  • Create and maintain optimal data pipeline architecture
  • Assemble large, complex data sets that meet both functional & non-functional business requirements
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Use SQL and big data cloud technologies to build the infrastructure required for optimal loading, extraction, and transformation of data from a wide variety of sources
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
  • Work with the Fullstack team to resolve data-related technical issues and support their data infrastructure needs
  • Work with data and analytics experts to improve the functionality of our data systems



Requirements

  • 3+ years of experience as a Data Engineer
  • 3+ years of experience with Python
  • 3+ years of experience with data warehousing technologies such as Amazon Redshift and Snowflake, NoSQL stores such as Redis and MongoDB, and experience developing BI solutions (DWH, ETL)
  • 3+ years of experience in SQL and ETL pipelines
  • 3+ years of experience with open-source Big Data tools such as Hadoop, Hive, Spark, Presto, and Sqoop
  • Experience with data platform architecture, in particular Big Data architecture; deep understanding of big data storage formats (especially Apache Parquet) and compression codecs such as Snappy
  • Experience with AWS cloud services such as S3, EMR, Athena, Lambda, Kinesis, and Glue
  • Experience with Docker, Kubernetes, Fargate, and ECS, and with Jenkins-based CI/CD
  • Experience with Cython, numpy, pandas, koalas, pyarrow, fastparquet, scipy, celery, and django
  • Experience with data pipeline and workflow management tools such as Azkaban, Luigi, and Airflow
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
  • Willing to collaborate and problem-solve in an open team environment
  • Flexible and adaptable when learning and adopting new technologies
  • B.Sc. or higher degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field


Nice to Have

  • Experience with Linux administration
  • Experience with software architecture
  • Understanding of cloud security best practices
  • Experience with agile development (Scrum, Kanban)
  • Experience supporting and working with cross-functional teams in a dynamic environment
  • Ability to take ownership and facilitate consensus among a diverse group of stakeholders

Seniority Level

Mid-Senior Level


Industry

Internet, Online Media, Marketing & Advertising



How to Apply

Interested? Send us an email with your introduction and CV to

What to Expect

We will review your application and, if selected, contact you to schedule a video interview.


You will be working from home (for the time being) and in our Tel Aviv office.


This is a full-time position: 40 hours, spread over 5 days a week.