Job Description
As a Data Engineer for Big Data Solutions, you will actively collaborate in the development of IT pipelines to transfer data between different systems in the international semiconductor operations network (SO). This includes:
- Working on ETL (Extract, Transform, Load) development for the Hadoop ecosystem, SQL, and other related technologies
- Supporting the development and programming of a custom ETL framework in Python
- Working with customer departments and senior management to understand the business objectives and requirements of the organization and develop solutions
- Collaborating with IT departments from different plants worldwide to roll out our developed solutions to the semiconductor IPN
- Being part of an agile development team and taking responsibility for tasks defined by the product owner

Qualifications
- University degree (Bachelor/Master) in Information Technology or a comparable qualification
- 3-5 years of work experience in a similar field
- Good communication skills (verbal and written), especially in meetings and reviews with all levels and departments
- Ability to work in an intercultural team
- Strong interest in modern technologies, an agile mindset, and the ability to work under pressure
- Reliability and flexibility to work in a pioneer team
- Knowledge of programming languages such as Python, Java, or C++
- Experience working in a Hadoop ecosystem (e.g., PySpark, Airflow, Kafka)
- Structured and independent way of working; analytical skills to grasp complex interrelationships
- Willingness to take on responsibility
- Experience with project management tools and processes
- Technical background within the semiconductor business
- Fluent in English (written and spoken); German skills are a plus

Additional Information
- Leave entitlement: annual leave, medical leave, etc.
- Company insurances, etc.