
Job Description for Principal Engineer, Data Infrastructure Team, CaaStle

Here at CaaStle, we pioneered the clothing rental model and are now powering it for apparel retailers and fashion brands. We complement traditional ownership business models with rental subscription services and help companies meet the demands of consumers looking for greater flexibility in the way they experience fashion.

Fueled by data and the desire to connect the dots differently, our platform is driven by our highly skilled and collaborative teams. From proprietary technology to marketing and infrastructure services, we make it easy for growth-oriented companies to expand their reach in the retail market with our fully-managed, end-to-end solutions.

With a robust product pipeline and prospective partnerships, some of our current US and UK clients include: Bored Teachers, Destination Maternity, Express, Eloquii, Gwynnie Bee, Haverdash, L.K. Bennett, Moss Bros., Rebecca Minkoff, Rebecca Taylor, Scotch and Soda, Stylist LA, and Vince.

As we grow, CaaStle looks to welcome new team members who are excited to work in a dynamic and high-growth environment that celebrates innovation and analytical thinking. Our workplace consists of an inspiring community of people from unique and diverse backgrounds, and our culture is built upon a foundation of respect and camaraderie. Join us in changing the face of fashion.

About the Role:

Job Title: Principal Engineer, Data Infrastructure Team, CaaStle

We are looking for experienced systems professionals with a proven track record in the management of large Hadoop data clusters. The candidate will be part of a Data Infrastructure group that works very closely with our Engineering and Operations teams. Data is central to all the core business decisions made at CaaStle. Our data cluster is a foundational component of our technology chain that delivers core business value to our customers. The ideal candidate will help us manage our cluster-based data services effectively (in terms of performance, fault resilience, security, and monitoring), and will lead our evolution into the next generation of Big Data technologies.

What you’ll do:

Primary responsibility: Health and growth of the CaaStle Big Data Cluster.

  • Management and monitoring of system-level metrics (processing, memory, disk, network).
  • Working with Product and Tech organizations to get the maximum benefit out of centralized data.
  • Regular participation in cross-platform design, engineering, and architecture forums within CaaStle.
  • Contribution to continual improvement in product quality and performance.
  • Promotion of best practices in Data Systems management and operations.
  • Detailed documentation.
  • Training members of the Engineering and Operations teams on the core functionality and features of the CaaStle Data platform.
  • Keeping up with technology evolution and development in the Data universe.
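To make the first responsibility concrete, a minimal sketch of the kind of host-level metric check this work involves, using only Python's standard library (the function name and threshold are illustrative, not part of any CaaStle tooling; production cluster monitoring would typically feed a dedicated system such as Prometheus or Cloudera Manager instead):

```python
import shutil

def disk_usage_report(path="/", warn_pct=80.0):
    """Return (used_pct, warning) for the filesystem holding `path`.

    A toy stand-in for the disk portion of system-level metric
    monitoring: compute percentage used and flag it against a
    warning threshold.
    """
    usage = shutil.disk_usage(path)            # total, used, free in bytes
    used_pct = usage.used / usage.total * 100
    return used_pct, used_pct >= warn_pct

used, warning = disk_usage_report("/")
print(f"/ is {used:.1f}% full; warning={warning}")
```

The same pattern (sample a metric, compare against a threshold, emit an alert signal) extends to memory, CPU, and network counters.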

We’d love for you to have:

A minimum of 7 years of experience in the following areas:

  • Proven track record of provisioning, configuring, managing, and monitoring HDFS data clusters.
  • Knowledge and expertise in configuration and management of non-stop, fault-resilient systems.
  • Knowledge and expertise in system-level performance monitoring and tuning of Data and Services running in HDFS environments.
  • Knowledge and expertise in Information and System security at multiple levels (File System, Data, Application, Cluster, Network).
  • Proven experience in the management of Hadoop database environments (e.g. Hive, Impala, HBase).
  • Proven experience in the management of messaging / stream processing technologies (e.g. Apache Kafka).
  • Proven experience in on-premises and AWS setup and management.
  • Proven production-level experience of scripting languages on Linux systems.
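As an illustration of the Linux scripting the last requirement refers to, a small sketch that parses per-datanode usage out of report text in the style printed by `hdfs dfsadmin -report` (the sample text, function name, and threshold below are illustrative assumptions, not CaaStle tooling):

```python
import re

# Sample of the per-datanode lines in an `hdfs dfsadmin -report`-style dump.
SAMPLE_REPORT = """\
Name: 10.0.0.11:50010 (dn1.example.com)
DFS Used%: 63.42%
Name: 10.0.0.12:50010 (dn2.example.com)
DFS Used%: 91.07%
"""

def overloaded_datanodes(report, threshold=85.0):
    """Return hostnames whose 'DFS Used%' exceeds `threshold`."""
    flagged = []
    host = None
    for line in report.splitlines():
        m = re.match(r"Name: \S+ \((\S+)\)", line)
        if m:
            host = m.group(1)      # remember the node this block describes
            continue
        m = re.match(r"DFS Used%: ([\d.]+)%", line)
        if m and host and float(m.group(1)) > threshold:
            flagged.append(host)
    return flagged

print(overloaded_datanodes(SAMPLE_REPORT))  # → ['dn2.example.com']
```

In production such a script would consume live command output rather than a hard-coded sample, and would feed an alerting pipeline instead of printing.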

Nice to have:

  • Direct hands-on experience with Cloudera technologies.
  • Experience with workflow systems.
  • Some software development experience.

  • Master’s Degree in Computer Science from a Tier-1 engineering college in India.
  • Alternatively, a Bachelor’s Degree in Computer Science with additional relevant experience.