- Design, build, and maintain efficient, scalable, and reliable ETL/ELT data pipelines across banking systems.
- Write and optimize complex SQL queries for large-scale transactional and analytical data processing.
- Work with Spark (batch/streaming) and Hive for data ingestion, transformation, and aggregation.
- Collaborate with business analysts, data scientists, and stakeholders to understand data needs and deliver solutions.
- Ensure data quality, governance, and compliance with banking regulations.
- Monitor, troubleshoot, and enhance data workflows to meet performance and availability SLAs.
- Support migration and integration of data from traditional databases into modern big data platforms.
Requirements
- 5–8 years of experience as a Data Engineer or similar role in the banking/financial services domain.
- Strong expertise in SQL (query tuning, stored procedures, performance optimization).
- Hands-on experience with Spark (batch and streaming) and Hive, at an intermediate level of proficiency in both.
- Good understanding of data warehousing concepts, ETL workflows, and data modeling.
- Familiarity with banking data domains such as payments, loans, compliance, or customer data is a plus.
- Experience with Unix/Linux scripting and version control (Git).
- Ability to work in Agile/DevOps environments and collaborate with cross-functional teams.