Work Location: India
Employment Type: Full Time
Experience: 6+ Years
· Create and maintain optimal data pipeline architecture.
· Assemble large, complex data sets that meet functional / non-functional business requirements.
· Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
· Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources.
· Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data warehouse needs.
· Work with data and analytics experts to strive for greater functionality in our data warehouse.
· Proven working experience as a Data Engineer (5+ years)
· Clear understanding of Teradata data warehouse architecture
· Good experience in scripting (preferably Python)
· Very strong Teradata programming and optimization skills
· Hands-on experience with stored procedures and Teradata utilities such as TPT Export/Load and FastLoad
· Strong data warehouse and ETL (DataStage) concepts
· Experience creating technical designs
· Familiarity with Hadoop and enthusiasm to learn new technologies
· Strong analytic skills related to working with unstructured datasets
· Ability to build processes supporting data transformation, data structures, metadata, dependency, and workload management
· A successful history of manipulating, processing, and extracting value from large, disconnected datasets
· Familiarity with our industry is a plus
· Excellent verbal and written communication skills
· Bachelor's degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field