Sr. Data Platform Engineer
EA Digital - Redwood City, CA


ENTERTAINING IS OUR PASSION

We're EA—the world's largest video game publisher. You're probably familiar with many of our titles—Madden, FIFA, The Sims, Need for Speed, Dead Space, Battlefield, and Star Wars, to name but a few. But maybe you don't know how we're committed to creating games for every platform—from social to mobile to console—to give our consumers the Anytime, Anywhere access they demand. What does that mean for you? It means more opportunities to unleash your creative genius, be inspired by those leading their fields, and ignite your path in any direction you choose.

The EADP Data Group is responsible for developing a new unified Big Data pipeline across all franchises at Electronic Arts. This platform will incorporate data collection, ingestion, processing, access, and visualization, all built on a modern, cloud-based tech stack with best-in-class tools. The Data Group will provide the tools and platform that power the future state of game development, marketing, sales, accounting, and customer experience.

We are looking for seasoned developers who are interested in building a large-scale distributed data system from the ground up at one of the most creative, innovative companies in technology.

Responsibilities:
Help define and build a unified data platform across EA, spanning 20+ game studios as data sources

Develop infrastructure software that slices and dices data using Hadoop and MapReduce

Design and develop reporting systems that inform on key metrics, detect anomalies, and forecast future results

Develop complex queries to solve data mining problems

Write reliable and efficient programs scaling to massive (petabyte) datasets and large clusters of machines

Work flexibly with both SQL and NoSQL solutions

Work closely with data modelers, business data analysts, and BI developers to understand requirements, develop ETL processes, validate results, and deliver to production

Analyze and improve efficiency, scalability, and stability of data collection, extraction, and storage processes

Required Skills

BS, MS, or PhD in Computer Science or a related technical discipline (or equivalent experience)

Significant experience working with large-scale systems and data platforms/warehouses

A solid foundation in computer science, with strong competencies in algorithms, data structures, and software design

Several years of software development experience writing clean, reusable code, with a background in test-driven development and continuous integration

Extensive experience with MapReduce, Hadoop, Hive, or other NoSQL stacks is a strong plus

Fluency with Java, SQL, Perl/Python, or C++

Fast prototyping skills and familiarity with scripting languages such as Bash, Perl, AWK, and Python

Experience working with columnar analytics databases or relational databases is a plus

Experience with data modeling and BI tools is a plus

Self-directed and capable of working effectively in a highly dynamic, fast-paced environment

Great communication, time-management, and teamwork skills
