KPMG
- Should have experience in Data and Analytics and have overseen end-to-end implementation of data pipelines on cloud-based data platforms.
- Strong programming skills in Python and PySpark, plus some combination of Java and Scala (good to have)
- Experience writing SQL, structuring data, and applying sound data storage practices.
- Experience using PySpark for data processing and transformation.
- Experience building stream-processing applications (Spark Streaming, Apache Flink, Kafka, etc.)
- Maintaining and developing CI/CD pipelines based on Gitlab.
- You have been involved in assembling large, complex structured and unstructured datasets that meet functional and non-functional business requirements.
- Experience working with cloud data platforms and services.
- Conduct code reviews, maintain code quality, and ensure best practices are followed.
- Debug and upgrade existing systems.
- Some knowledge of DevOps (nice to have).
QUALIFICATIONS
- Bachelor’s degree in computer science or a related field
- Experience with Snowflake and knowledge of transforming data using dbt (data build tool).
- Experience with AWS and API integration in general, with knowledge of data warehousing concepts.
- Excellent communication and team collaboration skills
To apply for this job please visit ejgk.fa.em2.oraclecloud.com.