Scale our client's Hadoop-based data processing pipelines -- including crawling, parsing, indexing, semantic analysis, language modeling, and analytics -- to handle complex processes efficiently and reliably.
Design core architecture and develop new features and performance improvements for high-traffic, high-availability web services.
Build infrastructure and tools to increase automation, improve efficiency of the engineering team, and maintain technical excellence in the code base.
BS/MS degree in Computer Science or related field.
Extensive background in algorithms and strong software architecture skills.
Experience maintaining distributed systems at significant scale in a production environment is essential.
Strong knowledge of web technologies, including details of HTTP, common web frameworks such as Tomcat or Django, networking, and web performance engineering.
Experience with map-reduce or large-scale data processing, Linux server systems, and MySQL is a plus.
ZipRecruiter - 20 months ago