Software Engineer, Data Infrastructure

San Francisco, United States

Why are Data Infrastructure Engineers important at Airbnb?

Data Infrastructure Engineering builds the distributed components, systems, and tools that power decisions at Airbnb. No other travel service participates so broadly in the travel experience: from discovery, to booking, to crafting experiences during the stay, to assessing the quality of trips. This gives us an incredibly rich dataset to collect, transform, and analyze in order to improve the effectiveness of our marketplace and create delight for guests and hosts.

We leverage existing open source technologies like Kafka, Hadoop, Hive, Presto, and Spark, and we also write our own. As a member of our team you would spend time designing and growing our existing infrastructure, democratizing data access across the company, and promoting the correct use of data and analytics.

What are examples of work that Data Infrastructure Engineers have done at Airbnb?

  • Experiment Reporting Framework: an internal tool for analyzing the results of experiments run on Airbnb, covering everything from test setup to a front-end interface for quickly identifying results. The DI team built the entire tool from scratch.
  • Ad hoc query tool: a front-end query-authoring tool that has been organically adopted by every team at Airbnb and has helped to democratize data access across the company. This was built entirely in house.
  • Core Data Namespace: a sanitized set of tables that serve as well-understood, trusted sources of truth for both facts and dimensions. We run a custom sanity-checking system atop these tables to ensure the data populated each day is healthy.
  • Chronos: our beefed-up version of "cron", which we developed and then open sourced.

The following experience is relevant to us:

  • Working with data at scale, specifically with distributed systems
  • Strong scripting ability
  • Working knowledge of relational databases and query authoring (SQL)
  • Enthusiasm for using and developing open source technologies like Kafka, Hadoop, Hive, Presto, and Spark
  • JVM experience preferred
  • Rigor in A/B testing, automated testing, and other engineering best practices
  • Deep understanding of relational database schema design and performance tuning
  • BS/MS/PhD in Computer Science or a related field is preferred

We offer the following benefits:
  • Stock
  • $2000 yearly employee travel coupon
  • Competitive salaries
  • Paid time off
  • Medical, dental, & vision insurance
  • Life & disability coverage
  • 401(k)
  • Flexible Spending Accounts
  • Apple equipment
  • Daily breakfast, lunch, and dinner
  • Weekly happy hour

Apply Now