Druva

Senior Staff Software Engineer (Foundation)

Pune, Maharashtra, India Full Time

About Druva

You won’t just join a company at Druva; you’ll help shape the future of data security at the moment it matters most. We are building a modern standard with our cloud-native solutions, designed to simplify the toughest challenges in cyber resilience for our customers. As the pioneer and market leader in fully managed SaaS data protection, we help organizations secure and recover their data from ransomware, cyberattacks, and operational disruptions without the complexity, cost, or risk of legacy infrastructure.

Our momentum is backed by the market: Druva was named a Leader in the 2025 Gartner® Magic Quadrant™ for Backup and Data Protection Platforms, a Leader in the 2025 IDC MarketScape for Cyber-Recovery, and a Leader & Outperformer in the 2025 GigaOm Cloud Data Protection Radar. Even better, customers validate that leadership every day through strong Gartner Peer Insights ratings, standout Net Promoter Scores (NPS), and top willingness-to-recommend results.

Visit druva.com and follow us on LinkedIn, X, and Facebook.

About the Role:

The Foundation team at Druva designs a highly performant, scalable cloud file system on the Druva cloud in AWS. Building this petabyte-scale, distributed, services-oriented cloud file system draws on key concepts such as file system metadata, versioning, and eventual consistency, and leverages AWS services such as S3, DynamoDB, and Kinesis. While the core file storage engine provides the backup storage for all Druva products, allied components like the indexing engine, key-value store, and big data pipeline provide scalable search, analytics, and compliance services. The team diligently tracks new AWS services, storage tiers, and the evolution of existing services so it can adopt them effectively in the background.

We are looking for a Senior Staff Software Engineer who is passionate about building highly scalable, secure, and performant infrastructure components that form the core of our data protection and data management platform. This role is ideal for someone with a deep understanding of systems programming, distributed storage, and cloud-native architecture and is looking to solve complex technical problems at scale.

We prefer candidates from Tier-1 institutes (IITs, NITs, BITS Pilani, IIIT-H, IISc) or those who have demonstrated exceptional systems-level depth through impactful work in high-scale backend systems, infrastructure platforms, or storage/security products.

Key Responsibilities

  • Own the architecture, high-level and low-level design of data protection and data management services and frameworks.
  • Design and implement secure, resilient, and highly scalable microservices using Python or Golang, following SaaS-first principles.
  • Collaborate with architects, product managers, DevOps, and peer engineering teams to build storage and data services that manage data and metadata at scale.
  • Continuously evaluate and integrate emerging technologies and tools to refine existing platforms and enhance product capabilities.
  • Drive the adoption of best practices in system design, observability, testing, and CI/CD pipelines for high-quality releases.
  • Mentor and guide junior team members in systems design, data protection principles, and high-velocity product development.
  • Stay hands-on and contribute actively to feature delivery, incident handling, performance tuning, and code reviews.

Must-Have Skills

  • An AI-first mindset toward software development, with experience applying generative AI across the software development lifecycle, from design to code to test, using tools like Cursor.
  • 5–7 years of experience, preferably in a product company, building global-scale distributed SaaS applications that handle petabytes of data.
  • Expertise in Python or Golang with a focus on scalable, performant systems.
  • Strong experience in cloud-native storage systems, metadata management, or distributed data pipelines.
  • Hands-on experience building storage, backup, archival, or data protection products is highly desirable.
  • Deep knowledge of cloud platforms like AWS or Azure and container orchestration using Kubernetes/Docker.
  • Experience with event-driven architecture, message queues (Kafka/RabbitMQ), and gRPC/REST APIs.
  • Familiarity with observability tools like Prometheus, Grafana, ELK, or Datadog.
  • Solid understanding of system performance, multi-threading, and concurrency control.

Desirable Skills

  • Prior experience with data security frameworks, encryption, key management, or compliance-focused features.
  • Exposure to CI/CD tools like GitLab CI, CircleCI, or Jenkins.
  • Agile development experience (Scrum/Kanban).
  • Strong problem-solving, system debugging, and communication skills.

Ideal Candidate Profile

  • Comes from a Tier-1 or Tier-2 engineering institute or has demonstrated deep backend systems expertise.
  • Has built or worked on platform-level components used by multiple engineering teams.
  • Enjoys tackling low-level system problems, scaling challenges, and performance bottlenecks.
  • Has a product mindset and collaborates well across teams to align technical design with business outcomes.

Qualifications:

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field.