Who is Plenti?
Plenti is a fintech lender, providing faster, fairer loans by leveraging its smart technology. Plenti is a dynamic and innovative business that is growing strongly. By continuing to deliver better customer experiences, Plenti is taking market share from incumbent players in the personal lending, renewable energy, and automotive finance markets.
We are a fast-moving and ambitious business that seeks to recruit smart and capable people who can take ownership of their role to help the business thrive. With over 250 people based in Australia, Plenti is of a size where everyone can make a difference in their role and help us realise our very big ambitions as a team, as we go about building Australia’s best lender.
Plenti is a founder-led business that launched in 2014 and has been listed on the ASX since 2020, with annual revenue of over $250 million and a loan portfolio of over $2.5 billion.
About the role:
At Plenti, data is at the heart of how we make decisions, build products, and understand our customers.
We’re looking for a Data Engineer who loves building things that just work: clean, reliable data pipelines that power analytics, reporting, and machine learning across the business.
You’ll be joining a modern data team working with cloud-native tools like AWS, Databricks, dbt, and Kubernetes. This is a hands-on engineering role where your work directly improves data quality, speed, and trust across the business.
Key responsibilities:
- Pipeline Development: Design and implement reliable data (ELT/ETL) pipelines using Airbyte for ingestion and dbt for data modelling and transformation.
- Orchestration & Maintenance: Configure and monitor software-defined data assets using Dagster. Troubleshoot pipeline failures and ensure SLAs for data freshness are met.
- Data Lake Management: Develop and optimize Databricks (Delta Lake) tables. Write efficient Spark/SQL queries to handle large datasets.
- Code & Scripting: Write clean, maintainable Python scripts for custom data extraction or utility tasks. Contribute to the team's codebase via Git/GitHub.
- Infrastructure Support: Monitor containerized workloads on AWS EKS (Kubernetes). Assist in debugging pod failures and resource bottlenecks.
- Data Quality Assurance: Implement tests (dbt tests, Great Expectations) to catch data anomalies early and ensure trust in our data products.
- AI/ML Integration: Apply AI/ML technologies where appropriate, integrating or leveraging AI models to enhance data processing, automation, or system intelligence.
- On-call Support: Participate in the on-call rotation for critical incidents and drive post-mortems to prevent recurrence.
About you:
You’re an experienced Data Engineer who’s confident working across modern data platforms and enjoys building scalable, reliable data solutions in cloud environments.
- 5+ years’ experience in a Data Engineering role
- Strong SQL skills with experience in complex queries, performance tuning, and data modelling (dimensional models, wide analytical tables, and curated “gold” datasets)
- Strong Python skills for data manipulation, automation, and scripting
- Hands-on experience with dbt (models, testing, and documentation)
- Exposure to Databricks and reverse ETL tools
- Experience with data orchestration tools such as Dagster, Airflow, or Prefect
- Working knowledge of cloud platforms, ideally AWS (S3, EC2, etc.), with exposure to Azure or GCP also valued
- Familiarity with Git and modern engineering practices (CI/CD, code reviews, Infrastructure as Code basics)
- Experience using AI-assisted development tools (e.g. GitHub Copilot, Cursor)