Purdue University - Department of Computer Science - Hive Seeking Backend and Data Engineers

Hive Seeking Backend and Data Engineers

Hive is a full-stack deep learning platform that helps bring companies into the AI era. We take complex visual challenges and build custom machine learning models to solve them. For AI to work, companies need large volumes of high-quality training data. We generate this data through Hive Data, our proprietary data-labeling platform with over 500,000 globally distributed workers, producing millions of high-quality pieces of data per day. We then use this training data to build machine learning models for verticals such as Media, Autonomous Driving, Security, and Retail. Today, we work with some of the largest companies in the world to redefine how they think about unstructured visual data. Together, we build solutions that incorporate AI into their businesses to transform entire industries.

We are fortunate that investors like Peter Thiel (Founders Fund), General Catalyst, 8VC, and others see Hive’s potential to be groundbreaking in AI business solutions. We have over 100 rock stars globally in our San Francisco and Delhi offices.

Check out our case studies here: https://thehive.ai/case-studies

Backend Engineer Role: https://jobs.lever.co/castleglobal/dc4779c1-2a34-4b44-b641-3026f8e7b373

To execute our vision, we need to grow our team of best-in-class engineers. We are looking for developers who are excited about launching new products and features into production, who can work autonomously and aren't afraid to try new technologies, and who don't back down from the challenges of scale. Our ideal candidate has experience building core services and web-based APIs from the ground up, cares as much about the product itself as about the technology that powers it, and is capable of both structuring and writing clean, maintainable code.

Backend Engineer Responsibilities

Design, implement, or improve features in a variety of backend systems, including our REST APIs, microservices, data ingestion and processing systems, and distributed task/job processing systems
Write and maintain scalable, performant code that can be shared across platforms
Meaningfully contribute to the product and core backend systems by suggesting and executing improvements
Improve engineering standards, tooling, and processes
Practice test-driven development
Debug production issues across services and multiple levels of the stack
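To give a flavor of the API work described above, here is a minimal sketch of a REST-style route dispatcher in Node.js (the role's primary language). Everything in it is illustrative: the routes, payload shape, and handler names are hypothetical, not Hive's actual API.

```javascript
// Hypothetical route table mapping "METHOD /path" to a handler.
// Handlers return a { status, body } pair, mimicking an HTTP response.
const routes = {
  'GET /tasks': () => ({ status: 200, body: { tasks: [] } }),
  'POST /tasks': (payload) => {
    // Basic input validation: reject requests missing a required field.
    if (!payload || !payload.name) {
      return { status: 400, body: { error: 'name is required' } };
    }
    return { status: 201, body: { id: 1, name: payload.name } };
  },
};

// Dispatch a request to the matching handler, or 404 if none exists.
function handle(method, path, payload) {
  const handler = routes[`${method} ${path}`];
  if (!handler) return { status: 404, body: { error: 'not found' } };
  return handler(payload);
}
```

A production service would sit behind a framework such as Express, with real routing, authentication, and logging, but the dispatch-and-validate pattern is the same.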

Backend Engineer Requirements

You have a Bachelor's Degree in computer science or a related field
You have a few years of experience building web applications
You have experience or a strong interest in writing applications in Node.js
You have experience implementing highly available distributed systems/microservices
You have experience building scalable backend APIs
You have experience working with relational databases, Postgres preferred
An understanding of monitoring and alerting platforms is a plus
You strongly believe in high code quality, automated testing, and other engineering best practices
You have attention to detail and a passion for correctness
You are comfortable learning new technologies and systems
You have strong interpersonal and communication skills with a bias towards action

Data Engineer Role: https://jobs.lever.co/castleglobal/20beddea-8ceb-4dd3-8de4-ea554b77e7db

To execute our vision, we need to grow our team of best-in-class data engineers. We are looking for developers who follow impeccable data practices and build high-quality data infrastructure. We value hard workers who are comfortable improvising solutions to a stream of big-data challenges while building a system that stands the test of time. Our ideal candidate has experience building data infrastructure from the ground up, contributes innovative ideas and ingenious implementations to the team, and is capable of planning scalable, maintainable data pipelines. As a data engineer, you would initially work primarily on our Hive Media product, taking real-time data from hundreds of television streams and turning it into a combination of real-time and scheduled outputs, especially our signature ads feed. Your work would improve the quality of our results while reducing computational cost and latency. Expect truly novel challenges.

Responsibilities

Writing scheduled Spark pipelines that perform sophisticated query plans on the entirety of our datasets
Writing real-time pipelines that execute complex operations on incoming data
Synchronizing large amounts of data between unstructured and structured formats on various data sources
Creating testing and alerting for data pipelines
Building out our data infrastructure and managing dependencies between data pipelines
Determining and implementing metrics that provide visibility into our data quality
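The batch-aggregation work described above can be sketched in miniature. The snippet below is a hypothetical, in-memory stand-in for one pipeline stage (counting detected labels per channel); an actual scheduled pipeline would express the same group-and-count as a Spark job over the full dataset.

```javascript
// Hypothetical pipeline stage: count occurrences of each (channel, label)
// pair in a batch of detection records. Illustrative only -- a real
// scheduled pipeline would run the equivalent aggregation in Spark.
function aggregateByChannel(records) {
  const counts = new Map();
  for (const { channel, label } of records) {
    const key = `${channel}:${label}`;
    counts.set(key, (counts.get(key) || 0) + 1);
  }
  return counts;
}
```

The same shape (key by a composite field, reduce by count) carries over directly to a `groupBy(...).count()` in Spark, which is what makes small local prototypes like this useful before scaling up.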

Requirements

You have an undergraduate and/or graduate degree in computer science or a similar technical field, with a sound understanding of statistics
You have 1-2 years of industry experience as a data engineer
You have hands-on experience doing ETL and have written data pipelines in either Spark or MapReduce
You have a sound understanding of SQL or CQL
You have worked with data lakes such as S3 or HDFS
You have worked with various databases, such as Postgres, Cassandra, or Redshift before, and understand their pros and cons
You have a working knowledge of the following technologies, or are not afraid of picking them up on the fly: Mesos, Chronos/cron, Marathon, Jenkins
You are fluent in at least one scripting language (preferably Node.js or Python) and one compiled language (such as Scala, Java, or C)
You have great communication skills and ability to work with others
You are a strong team player, with a do-whatever-it-takes attitude

What We Offer You

We are a group of ambitious individuals who are passionate about creating a revolutionary machine learning company. At Hive, you will have a significant career development opportunity and a chance to contribute to one of the fastest growing AI startups in San Francisco. The work you will do here will have a noticeable and direct impact on the development of Hive.
Our benefits include competitive pay, equity, health/vision/dental insurance, catered lunch and dinner, and a corporate gym membership.
Thank you for your interest in Hive.

If you're interested in applying, please reach out to Kristine at kristine@thehive.ai with a copy of your resume.

Last Updated: Jan 24, 2019 4:20 PM

Department of Computer Science, 305 N. University Street, West Lafayette, IN 47907

Phone:(765) 494-6010 • Fax: (765) 494-0739

