Come work at a place where innovation and teamwork come together to support the most exciting missions in the world!
This is a great opportunity to be an integral part of a team building Qualys' next-generation, microservices-based technology platform, which processes over a billion transactions and terabytes of data per day. You will leverage open-source technologies and work on challenging, business-impacting projects.
Responsibilities:
As a Principal Big Data Engineer, you will be responsible for developing a cutting-edge, scalable application used by millions of customers worldwide. This position offers an opportunity to work with world-class engineers and build resilient features to support the next generation of our Asset Management product in the cloud. The work involves complete ownership from a delivery perspective, including design, architecture, development, operations, and end-to-end support and maintenance of our systems. You will be the face of the team on all technical aspects of the features you own. You will also guide and mentor other talented engineers on your team through highly complex projects and be responsible for hiring new engineers and interns.
Requirements:
- Excellent programming and design skills with 11-15+ years of hands-on experience in Java back-end development, including the Spring Boot framework.
- Ability to independently design moderate to complex modules with solid code quality
- Strong understanding of data structures and algorithms
- Innovative thinker adept at building high-performance applications
- Experience with messaging middleware using Kafka.
- Strong Java programming skills including object-oriented design, prototyping, development, testing, profiling, etc.
- Knowledge of Docker, Kubernetes, Jenkins, and related CI/CD tools
- Ability and skill to debug & solve complex issues in a high-performing environment
- 7+ years of relevant experience designing and architecting Big Data solutions using Spark
- 5+ years of experience working with engineering resources on innovation.
- 5+ years of experience with Big Data event-flow pipelines.
- 5+ years of experience in performance testing for large infrastructure.
- 5+ years of in-depth experience with search solutions such as Solr/Elasticsearch.
- 7+ years of experience with Kafka
- In-depth experience with data lakes and related ecosystems.
- In-depth experience with messaging queues
- In-depth experience defining requirements to build scalable architectures for Big Data and microservices environments.
- In-depth experience with caching components and services
- Knowledge of Presto.
- Knowledge of Airflow.
- Hands-on experience in scripting and automation
- In-depth understanding of RDBMS/NoSQL (Oracle, Cassandra), Kafka, Redis, Hadoop, and lambda, kappa, and kappa++ architectures with Flink data streaming and rule engines
- Experience with ML model engineering and related deployment.
- Experience designing and implementing secure Big Data clusters to meet compliance and regulatory requirements.
- Experience in leading the delivery of large-scale systems focused on managing the infrastructure layer of the technology stack.
- Strong experience in performance benchmarking for Big Data technologies.
- Strong troubleshooting skills.
- Experience leading development life cycle processes and best practices
- Experience in Big Data services administration would be an added value.
- Experience with Agile methodologies (Scrum, RUP, XP), OO modeling, and internet, UNIX, middleware, and database-related projects.
- Experience mentoring/training the engineering community on complex technical issues.
Desired Qualifications
Bachelor's or Master's degree in Computer Science or Engineering (M.E/M.Tech, B.E/B.Tech) and 11 to 15 years of experience in the job offered, or as a Principal Big Data Engineer, Lead Big Data Engineer, or in a similar related role