We’re determined to make a difference and are proud to be an insurance company that goes well beyond coverages and policies. Working here means having every opportunity to achieve your goals – and to help others accomplish theirs, too. Join our team as we help shape the future.
Key Responsibilities
Implement AI data pipelines that bring together structured, semi-structured, and unstructured data to support AI and agentic solutions. This includes pre-processing with extraction, chunking, embedding, and grounding strategies to prepare the data.
Build and maintain scalable, robust real-time data streaming pipelines using technologies such as Apache Kafka, AWS Kinesis, or Spark Streaming.
Develop AI-driven systems to improve data capabilities.
Implement efficient Retrieval-Augmented Generation (RAG) architectures and integrate them with enterprise data infrastructure.
Develop data domains and data products for various consumption archetypes, including reporting, data science, AI/ML, and analytics.
Ensure the reliability, availability, and scalability of data pipelines and systems through effective monitoring, alerting, and incident management.
Model domain entities, relationships, and business logic in knowledge graphs (e.g., Neo4j, Amazon Neptune, RDF).
Implement a scalable semantic layer with dynamic query translation to deliver real-time insights for conversational analytics.
Collaborate closely with DevOps and infrastructure teams to ensure seamless deployment, operation, and maintenance of data systems.
Develop graph database solutions for complex data relationships supporting AI systems.
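To make the pre-processing and retrieval responsibilities above concrete, here is a minimal RAG-style sketch. It uses a toy bag-of-words similarity in place of a real embedding model and vector store (both assumptions for illustration; a production pipeline would call an embedding service and a vector database such as those named in this posting):

```python
import math
from collections import Counter

def chunk(text: str, size: int = 40) -> list[str]:
    # Split a document into fixed-size word chunks (a simple chunking strategy).
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real pipeline would call an
    # embedding model here instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Rank chunks by similarity to the query and return the top-k,
    # which would then ground a language model's prompt.
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]
```

In production, the in-memory chunk list would live in a vector store, and the retrieved chunks would be injected into the model prompt as grounding context.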
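Similarly, the knowledge-graph modeling responsibility can be sketched with a tiny in-memory triple store in the spirit of RDF. This is a stand-in for Neo4j or Amazon Neptune, and the entity and relation names are illustrative assumptions, not actual domain entities:

```python
from collections import defaultdict

class TripleStore:
    # Minimal subject-predicate-object store modeling domain entities,
    # relationships, and business logic as a graph.
    def __init__(self):
        self.by_subject = defaultdict(set)

    def add(self, subject: str, predicate: str, obj: str) -> None:
        # Record one (subject, predicate, object) triple.
        self.by_subject[subject].add((predicate, obj))

    def query(self, subject: str, predicate: str) -> list[str]:
        # Return all objects linked to `subject` via `predicate`.
        return sorted(o for p, o in self.by_subject[subject] if p == predicate)

# Hypothetical insurance-flavored entities for illustration only.
g = TripleStore()
g.add("Policy-123", "covers", "Vehicle-9")
g.add("Policy-123", "heldBy", "Customer-42")
g.add("Customer-42", "livesIn", "Ohio")
```

A graph database would add indexing, a query language (e.g., Cypher or SPARQL), and traversal across many hops, which is what makes complex relationships usable by AI systems.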
Required Skills & Experience:
Bachelor's degree in Computer Science, Artificial Intelligence, or a related field.
3+ years of data engineering experience, including data solutions, SQL and NoSQL, Snowflake, ETL/ELT tools, CI/CD, big data, cloud technologies (AWS, Google Cloud, or Azure), Python/Spark, and data mesh, data lake, or data fabric architectures.
2+ years' experience with cloud platforms (AWS, GCP, or Azure).
1+ years of data engineering experience focused on supporting AI technologies.
1+ years of hands-on experience implementing AI data solutions.
1+ years' experience with prompt engineering techniques for large language models.
1+ years' experience implementing Retrieval-Augmented Generation (RAG) pipelines, integrating retrieval mechanisms with language models.
1+ years' experience implementing AI-driven data systems supporting agentic solutions (AWS Lambda, S3, EC2, LangChain, LangGraph).
2+ years of programming experience in Python and familiarity with deep learning frameworks such as PyTorch or TensorFlow.
1+ years' experience building AI pipelines that bring together structured, semi-structured, and unstructured data.
1+ years' experience with vector, graph, NoSQL, and document databases, including design, implementation, and optimization (e.g., Amazon OpenSearch, GCP Vertex AI, Neo4j, Spanner Graph, Amazon Neptune, MongoDB, DynamoDB).
Strong written and verbal communication skills.
Able to communicate effectively with technical teams.
Team player who collaborates effectively across teams.
Strong organization and execution skills.
Strong interpersonal and time management skills.
Ability to work successfully in a lean, agile, and fast-paced organization, leveraging Agile principles and ways of working.
Ability to translate technical topics into business solutions and strategies.