The Role
We are seeking an experienced Principal Software Engineer (Node.js) to drive the technical vision and execution of our Data Discovery and Removal Automation Team, which is responsible for developing automation workflows and web crawlers at scale. This is a high-impact role that requires deep technical expertise in Node.js and the ability to architect and optimize high-scale automation solutions.
As a Principal Engineer and Individual Contributor, you will work closely with leadership to define strategy, own architecture, set technical direction, and ensure technical excellence. You will be responsible for designing, implementing, and maintaining scalable, secure, and efficient automation solutions. Additionally, you will have the opportunity to explore and integrate generative AI technologies to enhance automation workflows, enabling greater scalability and efficiency.
This is a fully remote position, based in the United States.
Responsibilities
- Architect and develop scalable automation systems, web crawlers, and data-processing pipelines.
- Provide technical direction to the Automation Team, setting best practices for development, security, and performance.
- Collaborate with product and engineering teams to define the roadmap and technical direction.
- Perform code reviews, mentor junior engineers, and foster a culture of high-quality software development.
- Optimize existing systems for performance, reliability, and scalability.
- Stay up to date on cutting-edge web automation and security trends.
- Lead troubleshooting efforts for complex technical issues and system failures.
Required Qualifications
- 8+ years of experience as a software engineer, including at least 3 years of hands-on experience with Node.js.
- 2+ years of experience in a principal software engineer role, guiding architecture decisions.
- Expertise in building web crawlers, scrapers, and automation tools.
- Strong experience with Puppeteer for web automation and scraping.
- Strong experience with asynchronous programming, event-driven architecture, and message brokers (e.g., RabbitMQ, Kafka).
- Proficiency with SQL databases (e.g., PostgreSQL) and NoSQL technologies (e.g., Redis, Elasticsearch, MongoDB).
- Hands-on experience with Kubernetes and cloud services (AWS, GCP, or Azure).
- Deep understanding of security best practices for handling sensitive user data.
- Experience designing scalable and distributed systems.
- Strong problem-solving skills and ability to work autonomously.
Bonus Qualifications
- Experience with generative AI technologies and using them to enhance workflow automation.
- Experience in the consumer privacy or consumer data industries.
- Familiarity with Python and Django.
Location
This is a remote-first position, based in the United States. Optery is headquartered in the San Francisco Bay Area, but operates as a fully remote global team.
Compensation & Benefits
- $150K - $195K
- Competitive equity
- Health, dental, and vision insurance
- 401k program with employer match
- Paid time off policy
- Stipend for home office setup
Equal Opportunity
Optery values diversity and is an equal opportunity employer. Optery does not discriminate on the basis of race, color, religion, sex (including pregnancy and gender identity), national origin, political affiliation, sexual orientation, marital status, disability, genetic information, age, membership in an employee organization, retaliation, parental status, military service, or other non-merit factor.
🚀 Y Combinator Company Info
Y Combinator Batch: W22
Team Size: 55 employees
Industry: B2B Software and Services -> Security
Company Description: Opt out software that removes your private info from the internet
📋 Job Details
Job Type: Full-time
Experience Level: 11+ years
Engineering Type: Backend
🛠️ Required Skills
Node.js, Django, Kubernetes, MongoDB, Python, RabbitMQ, Redis, Kafka, SQL, Elasticsearch, PostgreSQL