Acquire data from various sources and ensure it is ingested efficiently for the organization's future use.
Address the organization's need for accurate, relevant, and up-to-date data by converting raw data into an easy-to-understand format for analysis and reporting.
Create new data values, transform them into useful information, and design solutions.
Requirements
More than three years of work experience in Java, Golang, or Python.
Experience with Kafka/Pub-Sub.
Experience with Data Transformation.
Experience in designing and developing solutions following best practices within a large, complex environment.
Experience with stream processing technologies such as Apache Spark, Apache Kafka, Apache Flink, Apache Beam, Apache Storm, etc.
Strong SQL, Python, and performance-tuning skills.
Capable of logical and physical database design, development, analysis, architecture, and modeling.
Experience in architecting multi-tier, distributed database applications.
Passionate about Data, Big Data, ML, and learning new technologies.
Experience in designing and developing large-scale applications utilizing Big Data technologies.