Hi,
We have openings for a Data Architect in Melbourne, Australia.
Job Description:
Experience developing data pipelines using Azure Data Factory and PySpark with
Azure Databricks.
Hands-on experience building end-to-end data pipelines that take data from source
systems to ADLS, ensuring data quality and consistency are maintained at all
times.
Experience with Apache Kafka / Azure Event Hub for processing streaming and event-based data into ADLS Gen2.
Experience with other open-source big data products such as Hadoop.
Experience working in a DevOps environment with tools such as Azure DevOps.
Experience integrating ADF pipelines with API interfaces and Azure Log
Analytics.
Mandatory Skills:
Extensive experience developing data ingestion using PySpark / Databricks / ADF into Azure Data Lake Gen2.
Desired Skills:
Azure Data Factory
Databricks
PySpark
Azure Event Hub
Azure Log Analytics
Postgres
If you are interested, kindly share your updated resume with srujan@burgeonits.com.