AI Data Engineer

Department
AI
Type
Full-time
Location
Netanya
About The Position

Fetcherr is revolutionizing industries through the power of deep learning and predictive AI. At the heart of our innovation is the Large Market Model (LMM), a sophisticated, adaptable AI engine that forecasts market dynamics with unparalleled precision. We empower our partners with real-time, data-driven intelligence to navigate complex markets. Join us to build the future of dynamic, AI-driven commerce.

We are seeking an exceptional AI Data Engineer to play a pivotal role in the development and operational excellence of our Large Market Model (LMM). You will be a hands-on builder and owner of the AI systems central to our company's success. You will architect and engineer the data and modeling pipelines that transform raw information into predictive intelligence, applying rigorous software development practices to the entire machine learning lifecycle.  

What You'll Do (Key Responsibilities):

  • Architect AI-Ready Datasets: Design, build, and orchestrate robust, scalable data pipelines that ingest and integrate diverse datasets to fuel our LMM.  
  • Master Feature Engineering: Go beyond data cleaning to conceptualize and implement sophisticated features, collaborating with domain experts to translate market dynamics into powerful predictive signals.  
  • Ensure System Reliability and Quality: Implement a comprehensive testing strategy for our AI systems, including data validation and model evaluation. Own the operational health of our production models by building robust monitoring for performance and data drift.  
  • Drive Innovation: Continuously research and experiment with new techniques in feature engineering, MLOps, and model optimization to keep our LMM at the cutting edge.  
What You'll Bring (Qualifications):

  • A proactive, ownership-driven mindset with excellent problem-solving and communication skills.  
  • A B.Sc. or Master's degree in Computer Science, Engineering, Mathematics, or a related quantitative field.  
  • A minimum of 2 years of commercial experience building production-level AI data pipeline systems.  
  • Expert-level proficiency in Python and its data science/machine learning ecosystem (e.g., Pandas, Scikit-learn).  
  • Proven experience designing and maintaining scalable data processing pipelines using distributed computing frameworks (e.g., Dask, Spark).  
  • Hands-on experience with MLOps principles and tools, including CI/CD for machine learning and pipeline orchestrators (e.g., Dagster, Airflow).  
  • Working experience building SQL-based ETL pipelines that extract, transform, and load large-scale datasets from diverse sources into structured formats for AI and analytics workflows.
  • Strong software engineering fundamentals, including experience with containerization (Docker), cloud platforms (AWS, GCP, or Azure), and API development.

Bonus Points:

  • Experience in aviation or a related high-frequency, dynamic-pricing industry.
  • Contributions to open-source projects in the data or ML space.

Application Form

Fill out the form to apply