Particula

AI/ML Engineer (LLMs & RAG)

Remote
Employee
Consulting, Engineering

Particula is the prime rating provider for digital assets, now bringing trusted, data-driven ratings on-chain. As DeFi matures and converges with TradFi, we’re building the rails that help institutions, protocols, and builders use ratings to unlock safer, more efficient capital flows.

About the role

You’ll help design, build, and ship LLM‑powered features that underpin our ratings and monitoring products. Working closely with the Head of AI, you’ll focus on AI‑powered token and asset analysis, automated report generation, multi‑modal document analysis, robust evaluation and observability, and reliable production delivery on AWS.

No one ticks every box. If you bring solid fundamentals, curiosity and the drive to learn quickly, please apply even if your experience doesn’t align one‑to‑one with the description. We care about potential and attitude.

Tasks

  • Build and maintain LLM‑powered features end‑to‑end (prompting, RAG pipelines, structured extraction/classification such as entity extraction).
  • Develop data ingestion, cleaning and indexing pipelines for RAG, including n8n workflows for intake and enrichment (connectors, transformations, error handling, scheduling).
  • Contribute to lightweight model tuning and systematic evaluation.
  • Establish evaluation and observability for RAG (dashboards, automated reporting, experiment tracking) to ensure reliability and factual grounding.
  • Optimise prompts, retrieval and context strategies to improve accuracy, reduce hallucinations and control latency/cost.
  • Work hand‑in‑hand with our ML/DevOps engineer to ensure smooth deployments, reliability and continuous improvement.
  • Coordinate and provide technical guidance to a small offshore AI development team (clear specifications, code reviews, quality standards), with support from the Head of AI.
  • Collaborate with product and engineering to scope and deliver incremental value in short, iterative releases.

Requirements

  • Strong Python skills; experience with PyTorch or Transformers; familiarity with the Hugging Face ecosystem.
  • Practical knowledge of LLM tooling (e.g. LangChain or LangGraph) and RAG concepts.
  • Experience building on AWS, ideally including:
    • Serverless functions (AWS Lambda) for orchestration,
    • Elastic compute (EC2) for workloads,
    • Foundation model services (Bedrock or SageMaker) for model hosting and tuning.
  • Hands‑on experience with n8n; broader workflow‑automation experience is a plus.
  • Containerisation with Docker; proficiency with Git and CI/CD.
  • MLOps fundamentals: MLflow for experiment and model tracking; evaluation frameworks (e.g. RAGAS).
  • Clear communication, collaborative mindset and focus on shipping.
  • Languages: strong English; German is a plus. Applications in English or German are welcome.
  • Education: Degree in Computer Science or equivalent preferred; equivalent practical experience acceptable.

Benefits

  • Offsites with the team in exciting locations
  • Flexible working hours in a remote‑first company
  • Exciting product in a very dynamic market environment
  • Values‑based start‑up culture
  • Many opportunities for professional growth and networking with committed people
  • Flat hierarchy

Let’s build the next layer of trust for digital assets - together!

Job ID: 15361413

Particula

11-50 employees
Technology, Information and Internet

The prime rating provider for digital assets.
Particula is the prime rating provider for digital assets, transforming on- and off-chain dat…
