As a Principal DevOps Engineer supporting DataOps, you will play a crucial role in enabling efficient and reliable data pipeline development, data analysis, and data governance.
Your primary responsibility will be to ensure smooth data ingestion into the data lake or data warehouse, enabling downstream data consumption for Data Scientists, Analysts, and other stakeholders.
You will utilize AWS services such as MSK/Kafka, S3, EMR or Glue, Lake Formation, Athena, and QuickSight, and manage ingestion from databases such as MS SQL, MySQL, Oracle, and PostgreSQL.
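For example, a downstream consumer would typically query the S3-backed data lake through Athena. A minimal sketch using boto3 follows; the region, database, table, and results bucket are hypothetical placeholders, not details of this role's actual environment:

    import time

    import boto3  # AWS SDK for Python

    athena = boto3.client("athena", region_name="ap-southeast-1")

    # Hypothetical database, table, and results bucket.
    response = athena.start_query_execution(
        QueryString="SELECT trade_date, COUNT(*) AS n FROM trades GROUP BY trade_date",
        QueryExecutionContext={"Database": "datalake_curated"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/queries/"},
    )
    query_id = response["QueryExecutionId"]

    # Athena is asynchronous: poll until the query reaches a terminal state.
    while True:
        status = athena.get_query_execution(QueryExecutionId=query_id)
        state = status["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)

    if state == "SUCCEEDED":
        # The first row returned is the column header row.
        results = athena.get_query_results(QueryExecutionId=query_id)
        for row in results["ResultSet"]["Rows"]:
            print([col.get("VarCharValue") for col in row["Data"]])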
KEY AREAS OF RESPONSIBILITIES
Data Pipeline Management:
Collaborate with Data Engineers to design, implement, and optimize data pipelines.
Ensure efficient and reliable data ingestion processes from various sources into the data lake or data warehouse.
Implement data validation and quality checks to maintain data integrity throughout the pipeline (see the sketch after this list).
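To make the data validation point above concrete, here is a minimal sketch; the record schema and checks are assumptions for illustration, not a prescribed standard:

    from datetime import datetime

    # Hypothetical schema for an ingested record; a real pipeline would derive
    # this from the source table definition or a schema registry.
    REQUIRED_FIELDS = {"id": int, "event_time": str, "amount": float}

    def validate_record(record: dict) -> list[str]:
        """Return validation errors; an empty list means the record is clean."""
        errors = []
        for field, expected_type in REQUIRED_FIELDS.items():
            if field not in record:
                errors.append(f"missing field: {field}")
            elif not isinstance(record[field], expected_type):
                errors.append(f"bad type for {field}: {type(record[field]).__name__}")
        # Semantic check: event_time must parse as an ISO-8601 timestamp.
        if isinstance(record.get("event_time"), str):
            try:
                datetime.fromisoformat(record["event_time"])
            except ValueError:
                errors.append("event_time is not ISO-8601")
        return errors

    # Route invalid records to quarantine instead of the lake.
    batch = [
        {"id": 1, "event_time": "2024-05-01T10:00:00", "amount": 9.5},
        {"id": "2", "event_time": "not-a-date", "amount": 1.0},
    ]
    clean = [r for r in batch if not validate_record(r)]
    quarantined = [r for r in batch if validate_record(r)]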
Data Analysis Support:
Assist Data Scientists and Analysts in their routine analytical work by providing data access and query support.
Optimize data storage and retrieval mechanisms to enhance performance and efficiency.
Collaborate with stakeholders to identify and implement solutions for data analysis and reporting requirements.
Data Governance and Security:
Utilize AWS Lake Formation and other relevant tools to establish and enforce data governance policies (see the sketch after this list).
Implement security measures and access controls to protect sensitive data.
Collaborate with stakeholders to ensure compliance with data privacy regulations.
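As an illustration of the Lake Formation point above, granting an analyst role SELECT on a catalog table can be scripted with boto3; the role ARN, database, and table names below are hypothetical:

    import boto3

    lakeformation = boto3.client("lakeformation", region_name="ap-southeast-1")

    # Hypothetical principal and table; in practice these come from the
    # organisation's access model and the Glue Data Catalog.
    lakeformation.grant_permissions(
        Principal={"DataLakePrincipalIdentifier": "arn:aws:iam::123456789012:role/AnalystRole"},
        Resource={"Table": {"DatabaseName": "datalake_curated", "Name": "trades"}},
        Permissions=["SELECT"],
    )

Granting access at the table (or column) level like this, rather than through raw S3 bucket policies, is what lets Lake Formation act as the central enforcement point for governance policies.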
FUNCTIONAL COMPETENCIES
Technical Expertise:
Strong expertise in AWS services such as MSK/Kafka, S3, EMR or Glue, Lake Formation, Athena, and QuickSight.
Proficiency in database management systems such as MS SQL, MySQL, Oracle, and PostgreSQL.
Familiarity with Change Data Capture (CDC) mechanisms and practices (a sketch follows after this list).
Solid understanding of data pipeline architecture, ETL processes, and data integration techniques.
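On the CDC point above: log-based CDC (e.g., Debezium streaming into MSK/Kafka) reads the database transaction log, while query-based CDC polls for rows changed past a high-water mark. A minimal sketch of the query-based pattern, using sqlite3 purely as a stand-in for the MS SQL/MySQL/Oracle/PostgreSQL drivers (table and column names are illustrative):

    import sqlite3  # stand-in for a production database driver

    def extract_changes(conn, last_watermark):
        """Query-based CDC: pull rows modified since the last high-water mark."""
        cur = conn.execute(
            "SELECT id, amount, updated_at FROM orders "
            "WHERE updated_at > ? ORDER BY updated_at",
            (last_watermark,),
        )
        rows = cur.fetchall()
        # Advance the watermark to the newest change observed.
        new_watermark = rows[-1][2] if rows else last_watermark
        return rows, new_watermark

    # Tiny in-memory demo.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, amount REAL, updated_at TEXT)")
    conn.executemany(
        "INSERT INTO orders VALUES (?, ?, ?)",
        [(1, 9.5, "2024-05-01T10:00:00"), (2, 1.0, "2024-05-01T11:00:00")],
    )
    changes, watermark = extract_changes(conn, "2024-05-01T10:30:00")
    print(changes, watermark)  # only the second row; watermark advances

ISO-8601 timestamps compare correctly as strings, which is what makes the watermark comparison in the WHERE clause safe here.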
Collaboration and Communication:
Excellent collaboration skills to work closely with Data Engineers, Data Scientists, Analysts, and stakeholders.
Strong communication skills to effectively convey technical concepts to both technical and non-technical audiences.
Ability to collaborate with cross-functional teams and promote a culture of knowledge sharing and continuous improvement.
Problem-solving and Troubleshooting:
Proven ability to identify and resolve complex technical issues related to data ingestion, processing, and analysis.
Experience in performance tuning, optimizing data storage and retrieval, and addressing scalability challenges.
Strong analytical and problem-solving skills to diagnose and troubleshoot system-level issues.
Automation and Infrastructure as Code (IaC):
Proficiency in infrastructure automation using tools such as Terraform and CloudFormation (see the sketch after this list).
Experience in managing and administering Kubernetes clusters on premises or via EKS on AWS.
Experience with version control systems (e.g., Git) and CI/CD pipelines for automated deployments.
Understanding of Infrastructure as Code (IaC) principles to maintain reproducible and scalable environments.
Proficient in programming languages such as Python, Golang, and Java.
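Illustrating the IaC items above: since CloudFormation is named, a minimal sketch drives it from Python via boto3. The template (a single versioned S3 landing bucket) and the stack name are hypothetical:

    import json

    import boto3

    # Deliberately tiny template; real stacks would live in version control
    # and deploy through a CI/CD pipeline rather than an ad hoc script.
    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "RawLandingBucket": {
                "Type": "AWS::S3::Bucket",
                "Properties": {"VersioningConfiguration": {"Status": "Enabled"}},
            }
        },
    }

    cloudformation = boto3.client("cloudformation", region_name="ap-southeast-1")
    cloudformation.create_stack(
        StackName="dataops-raw-landing",  # hypothetical stack name
        TemplateBody=json.dumps(template),
    )

    # Block until the stack reaches CREATE_COMPLETE (or raise on failure).
    cloudformation.get_waiter("stack_create_complete").wait(StackName="dataops-raw-landing")

The same declarative template deployed repeatedly yields identical environments, which is the reproducibility property the IaC principle refers to.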
MINIMUM QUALIFICATIONS
Bachelor's Degree in Computer Science, Information Systems Technology, or Software Engineering.
More than 7 years of relevant experience in Information Technology, especially in the field of DevOps.
Relevant work experience in the Financial Services and/or Technology sectors would be an added advantage.
Excellent command of both English and Bahasa Malaysia.