At CreativeLive, we strive to help millions of people all over the globe live their dreams in career, hobby, and life. We have streamed more than two billion minutes of educational content to students on every continent, and we’re just getting started.
We’re looking for candidates who share our vision to join our dynamic team and build the future of creative education.
Come make a difference with your unique talents. There’s a creator in all of us.
CreativeLive is looking for a talented **Senior or Mid-Level Software Engineer** to join our growing technical team in our San Francisco studios.
Your challenge: build scalable systems that tame a firehose of real-time metrics for our data scientists, marketers, commerce and merchandising teams, accountants, executives, and content production teams.
Our current data pipeline runs in **Docker**, captures batch dimensional data from **Mongo** and real-time events from **Kafka**, and performs automated ETL to **Redshift**. We are actively investigating other pipeline and warehousing alternatives to serve our future needs.
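To give a flavor of the kind of work involved: a common pattern in a Kafka-to-Redshift pipeline is flattening incoming events into delimited rows that are staged (e.g., to S3) and bulk-loaded with Redshift's `COPY` command. The sketch below is purely illustrative; the class, column names, and schema are hypothetical, not CreativeLive's actual code.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch: flatten a raw event into a pipe-delimited row
// suitable for staging and loading into Redshift via COPY.
public class EventFlattener {
    // Column order must match the target Redshift table definition.
    private static final String[] COLUMNS = {"event_id", "user_id", "event_type", "ts"};

    public static String toCopyRow(Map<String, String> event) {
        StringBuilder row = new StringBuilder();
        for (int i = 0; i < COLUMNS.length; i++) {
            if (i > 0) row.append('|');
            // Missing fields become empty strings; COPY with EMPTYASNULL
            // will load them as NULL.
            String value = event.getOrDefault(COLUMNS[i], "");
            // Escape the delimiter so a field value can't split the row.
            row.append(value.replace("|", "\\|"));
        }
        return row.toString();
    }

    public static void main(String[] args) {
        Map<String, String> event = new LinkedHashMap<>();
        event.put("event_id", "e-123");
        event.put("user_id", "u-42");
        event.put("event_type", "lesson_viewed");
        event.put("ts", "2017-01-01T00:00:00Z");
        System.out.println(toCopyRow(event));
        // prints: e-123|u-42|lesson_viewed|2017-01-01T00:00:00Z
    }
}
```

In practice this transform would sit in a Kafka consumer, with batching and retry logic around the staging and `COPY` steps.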
We support a variety of tools for visualization and analysis, including Tableau, Mode, Mixpanel, Google Analytics, Indicative, and internal dashboards in Grafana.
Although this position is focused on supporting product analytics via our data pipeline, we are particularly interested in candidates who will **fearlessly venture beyond the bounds of the data tier** to instrument and optimize related code in the runtime stack.
Some of the things that would make you a great fit for our team:
You embody creativity, positivity, and exploration, and are inspired by our mission.
You are an active contributor in the technical community.
You care deeply about code quality, performance, and execution.
You understand that software engineering is a team sport.
You're passionate about learning and sharing what you've learned.
What we're looking for:
3+ years professional experience as a software engineer.
Expertise in modern data engineering, from relational to NoSQL to big data (streaming and batch).
Total comfort with Unix/Linux.
Mastery of computer science fundamentals.
Enough Java experience to tame Kafka and Zookeeper.
Bonus points for:
A 4-year accredited technical degree, or equivalent professional experience (not just a coding bootcamp, please)
Experience with Hadoop, Hive, Spark, etc.
Experience with NLP, clustering, sentiment analysis, etc.
Prior work experience in an agile environment
Strong programming skills in a variety of languages
Mad skills with any tech in our stack: Node, Angular, Mongo, AWS, Docker, Kafka, Redshift, etc.