Project Overview -
We are seeking experienced linguists for an evaluation and annotation project, launching immediately, to support the quality evaluation of LLM- and MT-generated translations. The primary objective is to build a high-quality golden dataset to support the development and optimization of an AI agent.
What you will do -
- Review and annotate up to 230 text segments per locale
- Evaluate translations for accuracy, fluency, and correctness
- Add optional qualitative notes where relevant to support your evaluation decisions
- Identify, track, and document patterns, recurring errors, and linguistic insights to improve overall dataset quality
Requirements -
- Native-level proficiency in French
- Prior experience in linguistic quality assurance, annotation, or evaluation is strongly preferred
- Strong attention to detail, analytical thinking, and consistency in classification and judgment
- Comfort working with detailed annotation guidelines and rubrics
- Ability to follow detailed guidelines and meet project timelines
Project Details -
Contract Type: Freelance
Location: Canada (remote)
Duration: 1 to 2 weeks
Schedule: 10 hours weekly; flexible based on the client's needs