Delivered timely analyses of Big Data through successful installation, configuration, and administration of Hadoop ecosystem components and architecture.
Designed and developed big data processing components for the Data Archive and Research Toolkit.
Capable of processing large sets of structured, semi-structured, and unstructured data and of supporting systems application architecture.
Able to assess business rules, collaborate with stakeholders, and perform source-to-target data mapping, design, and review.
Familiar with data architecture, including data ingestion pipeline design, Hadoop information architecture, data modeling, data mining, machine learning, and advanced data processing; experienced in optimizing ETL workflows.
Two years’ experience developing, installing, configuring, and testing Hadoop ecosystem components.