Join the Revenue Science Revolution

We’re empowering non-analysts to focus on boosting revenue performance and delivering business value with deep insights—without worrying about coding, data silos, legacy data structures, data science, or engineering.

Our culture is guided by respect, transparency, collaboration, and results.

We’re excited to push the boundaries not only of our marketing technology but of the industry as a whole!

Sound like something you want to be a part of? Join our team in Israel and help our growing customers succeed!


Data Platform Team Lead

Herzliya, Israel (Hybrid) · Full-time

About The Position

The Opportunity: 

Dealtale, a VIANAI company, is a next-generation platform for driving breakthrough revenue opportunities across marketing, sales, and product teams. Revenue Science leverages causal machine learning methods to enable revenue teams to see beyond predictions and use data to identify prescriptive actions that drive profitable growth.

We are hiring a talented Data Platform Team Lead to join our team and drive the development and delivery of our new Revenue Science platform. This position is key to our future growth, and we are looking for someone who can hit the ground running, with a can-do attitude and strong collaboration skills.


Responsibilities:

  • Lead and manage a team of Data Engineers; mentor and guide them to create meaningful and actionable data pipelines.
  • Work with data and analytics experts to improve the functionality of our data systems.
  • Work closely with the Product team to define, measure, and identify opportunities for product improvements.
  • Drive new initiatives to elevate the professional level and quality of our data solutions and products.
  • Take an active part in our leadership team, consisting of Product, R&D, UX, and Marketing leaders.
  • Lead the data strategy: be responsible for planning, prioritizing, and building the right data solutions to support business needs.
  • Recruit and train new Data Engineers.
  • Create and maintain optimal data pipeline architecture.
  • Assemble large, complex data sets that meet both functional and non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Use SQL and big data cloud technologies to build the infrastructure required for optimal loading, extraction, and transformation of data from a wide variety of sources.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
  • Work with the Fullstack team to assist with data-related technical issues and support their data infrastructure needs.



Requirements:

  • 3+ years of experience as a Data Engineer.
  • 3+ years of experience with Python.
  • Ability to take ownership and facilitate consensus among a diverse group of stakeholders.
  • 3+ years of experience with data warehousing and NoSQL technologies such as Amazon Redshift, Snowflake, Redis, or MongoDB, plus experience with BI solutions development (DWH, ETL).
  • 3+ years of experience with SQL and ETL pipelines.
  • Experience with data platform architecture, in particular Big Data architecture, and a deep understanding of big data storage formats, especially Apache Parquet and Snappy compression.
  • Experience with AWS cloud services such as S3, EMR, Athena, Lambda, Kinesis, and Glue.
  • Experience with Cython, NumPy, pandas, Koalas, PyArrow, fastparquet, SciPy, Celery, and Django.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Experience leading dev teams.
  • Willingness to collaborate and problem-solve in an open team environment.
  • Flexibility and adaptability in learning and understanding new technologies.
  • B.Sc. or higher degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.


Advantages:

  • 3+ years of experience with open-source Big Data tools such as Hadoop, Hive, Spark, Presto, and Sqoop.
  • Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
  • Experience with Docker, Kubernetes, Fargate, ECS, and Jenkins-based CI/CD.
  • Understanding of cloud security best practices.

Apply for this position

Get onboard and experience the Dealtale difference!

Schedule your Demo today

Make the best career decision ever
