The Data Engineer will be responsible for building, maintaining, and optimizing scalable data pipelines and infrastructure to support analytics, reporting, and machine learning initiatives.
Key Responsibilities
Design, develop, and maintain ETL/ELT pipelines.
Build and manage data lakes, warehouses, and data marts.
Ensure data quality, governance, security, and reliability.
Integrate data from multiple internal and external sources.
Develop batch and real-time data processing solutions.
Work with business teams to define data requirements.
Optimize database performance and data storage solutions.
Support BI and analytics teams with clean, structured datasets.
Implement monitoring and alerting for data systems.
Required Skill Set
Strong SQL skills and experience with relational databases.
Proficiency in Python, Java, or Scala.
Experience with ETL and workflow orchestration tools such as Talend, Informatica, Apache NiFi, or Apache Airflow.
Knowledge of big data technologies such as Hadoop, Spark, Kafka, and Hive.
Familiarity with cloud data services such as AWS Redshift, Glue, and S3; Azure Data Factory; or Google BigQuery.
Experience with Snowflake, Databricks, or similar platforms.
Understanding of data modeling, warehousing, and governance.
Experience with API integration and data ingestion.
Qualifications
Bachelor’s degree in Computer Science, Information Systems, Engineering, or a related field.
3–8 years of experience in data engineering.
Certifications in cloud or big data technologies are preferred.