Description & Requirements

The Team

Bloomberg runs on data! It's our business and our product. Financial institutions, from the biggest banks to elite hedge funds, need timely, accurate data to capture opportunities and evaluate risk in fast-moving markets.

Bloomberg's application teams face complex challenges: large-scale data storage, low-latency retrieval, high-volume requests, and high availability across a distributed computing and storage environment. Hadoop Infrastructure's mission is to provide a multi-tenant, observable, and highly available infrastructure, backed by the open-source Apache Hadoop platform (HDFS, HBase, Hive, Oozie, YARN/Spark, etc.), that supports large-scale data applications across Bloomberg. The team manages clusters holding tens of petabytes of storage spread across thousands of servers, serving hundreds of billions of requests per day and running tens of thousands of jobs against hundreds of thousands of tables daily. We also provide standard methodologies and domain expertise on Hadoop services to application teams across Bloomberg's various product domains.

Who are you?

  • You are a dedicated and motivated engineer interested in building and managing large-scale distributed systems, looking for a tight-knit, collaborative team.
  • You are an innovative problem solver who enjoys working in multiple roles and thrives in a fast-paced environment.
  • You want to make a significant impact and contribute to open-source software.

We’ll trust you to:

  • Advance how tenants across multiple product domains leverage Hadoop Infrastructure services to meet their goals.
  • Provide and improve the capabilities needed to migrate our massive data and compute footprint (tens of petabytes) to newer versions and to support it afterward.
  • Improve our tenants' experience when securely interacting with powerful underlying infrastructure frameworks.
  • Understand and improve the usability, reliability, and scalability of open-source Apache Hadoop services to meet the needs of Bloomberg application teams.

You’ll need to have:

  • 4+ years of demonstrated experience with an object-oriented programming language (Java) and associated technologies (e.g., Spring, JMX, JDBC).
  • A degree in Computer Science, Engineering, or a similar field of study, or equivalent work experience.
  • 3+ years of experience with the Hadoop ecosystem and related technologies (HBase, Hive 3, HDFS, Spark, Oozie).
  • Knowledge of modern development methodologies and tools (Jenkins, Maven, Jira).
  • A solid understanding of the Linux operating system, shell scripting, and OS troubleshooting.
  • Strong problem-solving and communication skills.

We'd love to see:

  • Experience with distributed systems architecture and system design.
  • Knowledge of Ansible.
  • Experience working with open-source software/community.

Salary Range: $160,000 - $240,000 USD annually + Benefits + Bonus

The referenced salary range is based on the Company's good faith belief at the time of posting. Actual compensation may vary based on factors such as geographic location, work experience, market conditions, education/training and skill level.

We offer one of the most comprehensive and generous benefits plans available, with a range of total rewards that may include merit increases, incentive compensation (exempt roles only), paid holidays, paid time off, medical, dental, and vision coverage, short- and long-term disability benefits, a 401(k) with company match, life insurance, and various wellness programs, among others. The Company does not provide benefits directly to contingent workers/contractors and interns.

Is this a remote job?
No

Bloomberg unleashes the power of information and technology to bring clarity to a complex world.

Global customers rely on us to deliver accurate, real-time business and market-moving information that...

Apply Now