BloomReach is a startup located in Mountain View. We have a top-notch
team from Google, Amazon, Groupon, VMware, Yahoo!, Cisco and other
successful Valley startups. We're graduates of Stanford, Carnegie
Mellon, Berkeley, MIT, UIUC, UT Austin, Princeton, Dartmouth, the
IITs, Harvard Business School and many more! We are well-funded,
backed by Bain Capital Ventures, Lightspeed Ventures, and NEA.
We are building a "web relevance engine": a web-scale effort to make
great content on the internet discoverable. With a talented team of
engineers, we have built products that process web-scale data, handle
millions of page views a day, and generate meaningful revenue.
We are looking for talented engineers with expertise in web search,
data mining, machine learning, algorithms, natural language processing
and/or large-scale systems who want to be at a rapidly growing
company. If you don't have all of the relevant domain experience but
are really smart, have great coding skills, believe you have the
technical aptitude to learn fast, and have a passion for start-ups,
let us know.
The Engineer's primary role will be processing millions of web pages,
extracting interesting pieces of data, analyzing the structure of the
web, and finding relevant correlations. You will design and build
software to tackle difficult, large-scale machine learning and
information retrieval problems, and build large-scale backend
infrastructure for low-latency, high-throughput applications.
Additionally, you will work with the rest of the engineering team to
scale machine learning algorithms and deploy them on clusters. Be
ready to lead or help with other product design and engineering projects!
Minimum Job Requirements:
- Currently pursuing a BS/MS/PhD in Computer Science or a related
  field, with a plan to graduate in Dec 2013 or Spring
- Solid programmer with significant development experience in a Linux
  environment
- Experience with information retrieval and machine learning at web scale