About the Role
The Kuala Lumpur office is the technology powerhouse of MoneyLion. We pride ourselves on innovative initiatives and thrive in a fast-paced and challenging environment. Join our multicultural team of visionaries and industry rebels in disrupting the traditional finance industry!
At MoneyLion, we measure everything and rely on data to guide our decisions, including both long-term strategies and day-to-day operations.
As a Senior Data Engineer, your main goal is to support data scientists, analysts, and software engineers by providing maintainable infrastructure and tooling they can use to deliver end-to-end solutions to business problems. You will work with terabyte- to petabyte-scale data in a complex data environment supporting multiple products and data stakeholders across the US, KL, and Armenia.
You will be responsible for designing and implementing an analytical environment built from in-house and third-party tools, using Python and/or Java to automate data workflows and enable efficient processing of data that is growing in both volume and complexity.
You will design and implement complex data pipelines and data models for analytical consumption.
You will work with Redshift, Snowflake, EMR, Kubernetes, Airflow, and more as the main tools of the job. You will write scalable, performant SQL queries over billions of rows of data and help simplify this processing so that insights can be extracted more easily.
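To give a flavor of the day-to-day tooling, here is a minimal sketch of an Airflow DAG that runs a daily warehouse aggregation. All names (the DAG id, table, and columns) are invented for illustration, not MoneyLion's actual pipeline:

    # Hypothetical illustration only: a minimal daily-aggregation DAG.
    # The DAG id, table, and column names are invented for this sketch.
    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def aggregate_daily_events(ds: str, **_) -> None:
        # In practice this query would be pushed down to the warehouse
        # (e.g. Redshift or Snowflake) through a hook; filtering on the
        # partition/sort key keeps the scan bounded even at billions of rows.
        query = f"""
            SELECT event_date, product_id, COUNT(*) AS events
            FROM analytics.fact_events
            WHERE event_date = DATE '{ds}'
            GROUP BY event_date, product_id
        """
        print(query)

    with DAG(
        dag_id="daily_event_aggregation",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
        default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
    ):
        PythonOperator(
            task_id="aggregate_daily_events",
            python_callable=aggregate_daily_events,
        )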
You should have deep experience in designing and managing large datasets and pipelines to enable business use cases. You should be an authority on designing, implementing, and operating solutions that are scalable, stable, and cost-efficient.
Key Responsibilities
Design, implement, operate, and improve the analytics platform
Design data solutions using various big data technologies and low latency architectures
Collaborate with data scientists, business analysts, product managers, software engineers, and other data engineers to develop, implement, and validate deployed data solutions
Maintain the data warehouse with timely and quality data
Build and maintain data pipelines from internal databases and SaaS applications
Understand and implement data engineering best practices
Improve, uphold, and teach standards for code maintainability and performance in the code you submit and review
Mentor and guide junior engineers on the job
Qualifications
Expert at writing and optimizing SQL queries
Proficiency in Python, Java, or similar languages
Familiarity with data warehousing concepts
Experience with Airflow or other workflow orchestrators
Familiarity with basic principles of distributed computing
Experience with big data technologies such as Spark, Delta Lake, or others (see the sketch after this list)
Proven ability to innovate and lead delivery of complex solutions
Excellent verbal and written communication - proven ability to communicate with technical teams and summarize complex analyses in business terms
Ability to work with shifting deadlines in a fast-paced environment
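For a concrete, purely illustrative example of the Spark item above, here is a small PySpark batch job; the S3 paths and column names are hypothetical:

    # Hypothetical illustration only: a small PySpark batch rollup.
    # Paths and column names are invented for this sketch.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("daily_txn_rollup").getOrCreate()

    # Read a day of raw transactions, aggregate, and write the result
    # partitioned by date so downstream queries can prune partitions
    # instead of scanning the full dataset.
    txns = spark.read.parquet("s3://example-bucket/raw/transactions/")

    daily = (
        txns.where(F.col("txn_date") == "2024-01-01")
            .groupBy("txn_date", "product_id")
            .agg(
                F.count("*").alias("txn_count"),
                F.sum("amount").alias("total_amount"),
            )
    )

    daily.write.mode("overwrite").partitionBy("txn_date").parquet(
        "s3://example-bucket/marts/daily_txn_rollup/"
    )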
Desirable Qualifications
Authoritative in ETL optimization: designing, coding, and tuning big data processes using Spark
Knowledge of big data architecture concepts like Lambda or Kappa
Experience with streaming workflows to process datasets at low latencies
Experience in managing data - ensuring data quality, tracking lineage, and improving data discovery and consumption
Sound knowledge of distributed systems - able to optimize the partitioning, distribution, and massively parallel processing (MPP) of high-level data structures (see the sketch below)
Experience working with large databases, moving billions of rows efficiently, and modeling complex data
Familiarity with AWS is a big plus
Experience in planning day-to-day tasks, knowing how and what to prioritize, and overseeing their execution
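As a small sketch of the partitioning point above: the snippet below co-partitions two large datasets on the join key before joining them, so the shuffle is balanced across executors. The paths, the key, and the partition count are hypothetical tuning choices, not prescriptions:

    # Hypothetical illustration only: co-partition both sides of a large
    # join on the join key so the shuffle is evenly distributed.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("partition_tuning_demo").getOrCreate()

    users = spark.read.parquet("s3://example-bucket/dim/users/")
    events = spark.read.parquet("s3://example-bucket/raw/events/")

    # 400 partitions is a tuning knob sized to the cluster and data volume,
    # not a universal constant.
    joined = (
        events.repartition(400, "user_id")
              .join(users.repartition(400, "user_id"), on="user_id", how="inner")
    )

    joined.write.mode("overwrite").parquet("s3://example-bucket/marts/user_events/")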