Job Description

As a Data Engineer for Big Data Solutions, you will actively collaborate in the development of IT pipelines to transfer data between different systems in the international semiconductor operations network (SO). This includes:

- ETL (Extract, Transform, Load) development for the Hadoop ecosystem, SQL, and other related technologies
- Supporting the development and programming of a custom ETL framework in Python
- Working with customer departments and senior management to understand the business objectives and requirements of the organization and to develop solutions
- Collaborating with IT departments from different plants worldwide to roll out our developed solutions to the semiconductor IPN
- Being part of an agile development team and taking responsibility for tasks defined by the product owner

Qualifications

- University degree (Bachelor/Master) in Information Technology or comparable qualifications
- 3-5 years of similar work experience in the same field
- Good communication skills (verbal and written), especially in meetings and reviews with all levels and departments
- Ability to work in an intercultural team
- Strong interest in modern technologies, an agile mindset, and the ability to work under pressure
- Reliability and flexibility to work in a pioneer team
- Knowledge of programming languages such as Python, Java, or C++
- Experience working in a Hadoop ecosystem (e.g. PySpark, Airflow, Kafka)
- Structured and independent way of working; analytical skills to grasp complex interrelationships
- Willingness to take on responsibility
- Experience with project management tools and processes
- Technical background within the semiconductor business
- Fluent in English (written and spoken); German skills are a plus

Additional Information

- Leave entitlement (e.g. annual leave, medical leave, etc.)
- Company insurances, etc.