About Fetcherr
Fetcherr specializes in deep learning, algorithmic forecasting, and dynamic markets. The company is building a new generation of LLM-powered analytical systems that will transform how businesses interact with data.
As the Tech Lead for LLM Systems, you will architect and build Fetcherr’s LLM-based virtual analyst platform from the ground up. This is a highly technical, hands-on role focused on LLM architecture, RAG pipelines, prompt engineering, vector databases, and reliability infrastructure.
You’ll work with product, data, and engineering teams to design a powerful conversational analytics engine built on state-of-the-art LLM technologies.
Responsibilities
Architecture & System Design
- Architect end-to-end systems that combine:
  - LLMs (commercial + open-source)
  - Vector databases & embeddings
  - Retrieval-augmented generation (RAG)
  - Natural-language-to-SQL/query generation
  - Visualization and analytics libraries
- Define the technical roadmap and system architecture for the LLM platform.
Hands-On Technical Execution
- Build prototypes, run experiments, and evaluate LLM providers (e.g., OpenAI, Anthropic, Google Gemini).
- Develop and optimize:
  - Prompt engineering & chaining
  - RAG pipelines
  - Query translation engines
  - Auto-generated dashboards and visual outputs
- Ensure scalability, observability, security, and cost efficiency.
LLM Expertise & Innovation
- Lead research and experimentation for model selection, fine-tuning, context injection, and multi-agent tooling.
- Evaluate open-source vs. commercial LLMs and architect hybrid solutions.
- Define evaluation frameworks for accuracy, hallucinations, latency, and safety.
Organizational LLM Productivity Evangelism
- Lead organization-wide adoption of AI tools to improve productivity and quality.
Cross-Functional Technical Leadership
- Work closely with product managers and customers to shape use cases.
- Provide technical mentorship to LLM engineers, backend developers, and data engineers.
- Be the internal expert and advocate for LLM technologies across Fetcherr.