- Design, develop, test, deploy, support, and enhance data integration solutions that seamlessly connect enterprise systems within our Enterprise Data Platform.
- Innovate on data integration within our Apache Spark-based platform to ensure technology solutions leverage cutting-edge integration capabilities.
- Experience with ETL and data pipeline creation to load data from multiple data sources.
- 4+ years of working experience in data integration and pipeline development, with a BS degree in CS, CE, or EE.
- 2+ years of experience with data integration on AWS using Apache Spark, EMR, Glue, Kafka, Kinesis, and Lambda within S3, Redshift, RDS, and MongoDB/DynamoDB ecosystems.
- Strong hands-on experience in Python development, especially with PySpark in an AWS Cloud environment (an illustrative sketch of this kind of pipeline work follows the list below).
- Design, develop, test, deploy, maintain, and improve data integration pipelines.
- Experience with Python and common Python libraries.
- Strong analytical database experience: writing complex queries, query optimization, debugging, user-defined functions, views, indexes, etc.
- Strong experience with source control systems such as Git and Bitbucket, and with build and continuous integration tools such as Jenkins.
- Experience with Databricks and Redshift is a plus.
*Training provided if experience does not fully match the above requirements.
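For illustration only, here is a minimal PySpark ETL sketch of the kind of pipeline work described above. The bucket names, paths, and column names (orders, order_date, amount) are hypothetical placeholders, not references to any actual system.

```python
# Minimal PySpark ETL sketch: read raw CSV from S3, apply simple
# transformations, and write partitioned Parquet back to S3.
# All paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("example-data-integration-pipeline")
    .getOrCreate()
)

# Extract: load raw order data from a hypothetical S3 location.
orders = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("s3://example-raw-bucket/orders/")
)

# Transform: cast types and drop rows with missing amounts.
cleaned = (
    orders
    .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount").isNotNull())
)

# Load: write curated data as Parquet, partitioned by date, ready for
# downstream consumers such as Glue catalog tables or Redshift Spectrum.
(
    cleaned.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-curated-bucket/orders/")
)

spark.stop()
```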