Job Description

- Design and implement distributed data processing pipelines using Spark (RDD, DataFrame, Dataset, Streaming)
- Programming in Core Java; understanding of object-oriented design principles
- Design and implement end-to-end solutions

Requirements

- Experience working with open-source NoSQL technologies such as Cassandra, MongoDB, and Redis
- Familiarity with RDBMS, ETL, and data-warehouse technologies
- Experience engineering data pipelines using big-data technologies such as Kafka, NiFi, and Logstash
- Experience building Elasticsearch clusters and working with index configuration options: sharding, partitioning, aliases, watchers, etc.

Employment Type

  • Full Time

