We use cutting-edge technologies and high-performance computing platforms, such as high-end GPUs and TPUs, to make our software infrastructure, both on our robots and in the cloud, fast and easy to use. Our Infrastructure Team creates and maintains the underlying infrastructure and cloud services for these platforms.
Our robotic systems produce vast amounts of data, both from their input sensors and from their internal behavior. We are developing software to collect and parse this data, then store it and use it for deep learning and data science. Ultimately, this data will teach our robotic systems to move around both in the real world and in simulation.
We work in a fast-paced environment: we don't care about certificates or corporate accreditation, only about your ability to set up infrastructure such as compute servers, GCP/AWS interfaces, automation tools, and databases, and to write code for them when needed.
- Work with the Engineering Team to develop our back-end systems for large data sets
- Design and manage infrastructure, cloud solutions and production services
- Develop the interface layer between our infrastructure and higher-level components (robotic systems, machine learning libraries, visualization, etc.)
- Integrate in-house solutions with 3rd party tools such as AWS and GCP
- Report and present software development work, including status and results, clearly and efficiently, both verbally and in writing
- Write and maintain documentation for the code you write
- Bachelor’s degree in Computer Science, Computer Engineering, or a related technical field, or equivalent experience
- Strong knowledge of cloud services and tools, especially on the GCP and AWS platforms
- Familiarity with software collaboration tools (git, Jira, etc.)
- Hands-on experience with containerization tools such as Docker
- Ability to quickly learn new tools or services
- Experience with at least one (preferably more) orchestration/deployment tool such as Puppet, Chef, Ansible, Kubernetes
Plus points for any of these:
- Experience in C++ or Go
- Networking experience (TCP/IP, OSI model, LTE/WiMAX)
- Familiarity with monitoring (Prometheus, Graphite) and logging (ELK, fluentd) tools
- Experience using machine learning platforms, tools, and libraries (such as TensorFlow, Caffe, Chainer)
- Experience using log and data processing pipelines (Logstash, Elasticsearch, etc.)
- Experience using monitoring tools (CloudWatch, CloudTrail, Grafana, Kibana, CloudCheckr)
- Large-scale database administration experience
- Interest in machine learning and AI
- Contributions to Open Source projects