Data Engineer

Department: AI
Type: Full-time
Location: Israel
About The Position

Fetcherr, an expert in deep learning, algorithms, e-commerce, and digitization, is disrupting traditional systems with its cutting-edge AI technology. At its core is the Large Market Model (LMM), an adaptable AI engine that forecasts demand and market trends with precision, empowering real-time decision-making. Specializing initially in the airline industry, Fetcherr aims to revolutionize industries with dynamic, AI-driven solutions.

Fetcherr is seeking a Data Engineer to build large-scale, optimized data pipelines using cutting-edge technologies and tools. We're looking for someone with advanced Python skills and a deep understanding of memory and CPU optimization in distributed environments. This is a high-impact role whose responsibilities directly influence the company's strategic decisions and data-driven initiatives.

Key Responsibilities:

  • Build and optimize ETL/ELT workflows for analytics, ML models, and real-time systems
  • Implement data transformation using dbt, SQL, and Python
  • Work with distributed computing frameworks to process large-scale data (a minimal sketch follows this list)
  • Ensure data integrity and quality across all pipelines
  • Optimize query performance in cloud-based data warehouses
  • Automate data processes using orchestration tools
  • Monitor and troubleshoot pipeline systems
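
To give a flavor of the distributed processing work listed above, here is a minimal, hypothetical sketch of a Dask-based ETL step; the paths, column names, and function name are illustrative assumptions, not Fetcherr code.

```python
# Illustrative sketch only: read partitioned Parquet, aggregate, and write results,
# with explicit attention to partition sizes and worker memory. All names are hypothetical.
import dask.dataframe as dd

def build_daily_fares(input_path: str, output_path: str) -> None:
    # Lazily read only the columns needed, keeping worker memory bounded.
    df = dd.read_parquet(input_path, columns=["route", "date", "fare"])

    # Repartition to a predictable partition size before a shuffle-heavy groupby.
    df = df.repartition(partition_size="128MB")

    # Aggregate average fare per route and day; computation stays lazy until to_parquet.
    daily = df.groupby(["route", "date"]).agg({"fare": "mean"}).reset_index()

    # Write partitioned output so downstream consumers can prune by date.
    daily.to_parquet(output_path, partition_on=["date"], write_index=False)
```

The repartitioning before the shuffle-heavy aggregation is the kind of memory- and CPU-aware tuning this role emphasizes.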

You’ll be a great fit if you have... 

  • 4+ years of hands-on experience building and maintaining production-grade data pipelines at scale
  • Expertise in Python, with a strong grasp of data structures, performance optimization, and modern data processing libraries such as pandas and NumPy (a short illustration follows this list)
  • Practical experience with distributed computing frameworks such as Dask or Spark, including performance tuning and memory management
  • Proficiency in SQL, with a deep understanding of query optimization, analytical functions, and cost-efficient query design
  • Experience designing and managing transformation logic using dbt and Dask, with a focus on modular development, testability, and scalable performance across large datasets
  • Strong understanding of ETL/ELT architecture, data modeling principles, and data validation
  • Familiarity with cloud platforms (e.g. GCP, AWS) and modern data storage formats (e.g. Parquet, BigQuery, Delta Lake)
  • Experience with CI/CD workflows, Docker, and orchestrating workloads in Kubernetes
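
As a hypothetical illustration of the pandas/NumPy performance habits referenced above (vectorized operations and memory-aware dtypes rather than row-wise Python loops), with column names that are assumptions only:

```python
# Illustrative sketch: memory- and CPU-conscious pandas transformations.
import numpy as np
import pandas as pd

def normalize_fares(df: pd.DataFrame) -> pd.DataFrame:
    # Casting a low-cardinality string column to 'category' can cut memory substantially.
    df["route"] = df["route"].astype("category")

    # Vectorized arithmetic instead of DataFrame.apply keeps the work in C, not Python loops.
    df["fare_zscore"] = (df["fare"] - df["fare"].mean()) / df["fare"].std()

    # Downcast where full 64-bit precision is not needed.
    df["fare_zscore"] = df["fare_zscore"].astype(np.float32)
    return df
```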

Advantages:

  • Experience with Dagster or similar orchestration tools
  • Familiarity with testing frameworks for data workflows, such as pytest and Great Expectations (a short sketch follows this list)
  • Performance optimization skills, especially for Dask and pandas workloads
  • Experience designing cross-client solutions with a focus on efficiency
  • Knowledge of software architecture best practices (e.g. SOLID principles)
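
A minimal sketch of the kind of data-workflow test mentioned above, using plain pytest and pandas; the table and column names are hypothetical:

```python
# Illustrative data-quality tests for pipeline output.
import pandas as pd
import pytest

@pytest.fixture
def daily_fares() -> pd.DataFrame:
    # In a real suite this would load a sample of pipeline output (e.g. from Parquet).
    return pd.DataFrame(
        {"route": ["TLV-JFK", "TLV-LHR"], "date": ["2024-01-01", "2024-01-01"], "fare": [950.0, 420.0]}
    )

def test_fares_are_positive(daily_fares: pd.DataFrame) -> None:
    # Guard against sign errors or bad joins producing negative fares.
    assert (daily_fares["fare"] > 0).all()

def test_no_duplicate_route_days(daily_fares: pd.DataFrame) -> None:
    # Each (route, date) pair should appear at most once after aggregation.
    assert not daily_fares.duplicated(subset=["route", "date"]).any()
```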
