Is data engineering your passion, and do you love turning data into a valuable asset? Do you already have solid experience building reliable data pipelines from source to destination? Do you have an engineering mindset and apply software development principles to your work, with pair programming, code reviews, and high test coverage as your standard way of working? Do you love working in agile teams and delivering valuable increments in short iterations? If this sounds interesting to you, then it is YOU we need on our team!
Here’s what it’s about:
● Help your team reach the next level through active knowledge transfer.
● Take part in and help shape the introduction of new tools, methodologies, and techniques to deliver high-quality work.
● Design, develop, optimize, and maintain data architecture and pipelines that adhere to ETL principles and meet business goals.
● Solve complex data problems to deliver insights that help our business achieve its goals.
● Create data products for analytics and data science team members to improve their productivity.
● Implement and deploy emerging tools and processes for analytical data evaluation to improve our productivity as a team.
● Implement data mappings and design data flows in an agile environment.
● Automate and accelerate data flows.
● Further develop our data and analytics platforms.
● At heart, you are a passionate team player who respects the opinions of their colleagues, as:
● You know what it takes to be a great team player.
● You master challenges with a creative approach.
● You have an eye for detail and excel at documenting your work.
● You back your gut feeling and decisions with metrics.
● You work in a structured way and set the benchmark for quality.
● You have at least two years of experience as a data engineer, or equivalent experience.
● You have a deep understanding of various data stores, both structured and unstructured, and their capabilities (e.g., distributed filesystems, SQL and NoSQL data stores).
● You are comfortable working with analytics processing engines (e.g., Spark, Flink).
● You have worked with many different storage formats and know when to use which (e.g., JSON, Parquet, ORC).
● You know exactly how to structure data pipelines for reliability.
● You are open to new technologies.
● You have a bachelor’s degree in computer science or a related subject such as mathematics or physics.
● You speak English fluently (and maybe even a bit of German).