URBN: Nuuly: Senior Data Engineer (Havertown)

Salary: $83,690.00 - $125,470.00 /year *
Employment Type: Full-Time
Category: Information Technology


URBN is a portfolio of global consumer brands comprising Urban Outfitters, Anthropologie, Free People, BHLDN, Terrain and the Vetri Family. We are passionate, creative and entrepreneurial. We create unique retail experiences with an eye toward creativity and a singular focus on pleasing our customer.

Roles and Responsibilities:
- Partner with the Data Science, Analytics and Engineering teams to plan, design, create, maintain, use, and enhance our Kafka- and GCP-based data management capabilities and structure. Develop sustainable data management solutions that enable analytics and the implementation of insights.
- Identify potential data solutions and translate them into designs, models, and specifications. Own the discovery, analysis, and scoping of proposed data management capabilities, outcomes, and structure. Ensure that all work includes sufficient documentation and is easy to maintain, use, and reuse.
- Understand how technologies are applied to solve big data challenges in support of analytics needs. Stay abreast of the latest industry and technology developments in data management.
- Develop knowledge of the business and data environment, new data sources, data technologies and solutions, and industry trends. Partner with internal stakeholders to deepen understanding of business functions and informational needs.
- Design and conduct training sessions on the data catalog, data quality, and data models. Provide supporting documentation to team members and business partners as needed.

Qualifications:
- Master's degree, or equivalent experience, in Computer Science, Engineering, Mathematics, or a related field.
- 8+ years of experience in IT platform implementation in a highly technical and analytical role.
- 5+ years of experience implementing data analytics platforms, including 3+ years of hands-on experience implementing and performance-tuning Hadoop/Spark/Kafka Streams deployments.
- Understanding of Apache Kafka and the data analytics ecosystem. Experience with one or more relevant tools (Kafka, ZooKeeper, Elasticsearch, Avro).
- Experience developing software in one or more programming languages (Java, Scala, Kotlin, Python, etc.).
- Current hands-on implementation experience required.
- Hands-on experience leading large-scale, global data warehousing and analytics projects.
- Understanding of database and analytical technologies in the industry, including MPP and NoSQL databases, data warehouse design, BI reporting, and dashboard development.
- Implementation and tuning experience with BigQuery (BQ).

The above information has been designed to indicate the general nature and level of work performed by employees within this classification. It is not designed to contain or be interpreted as a comprehensive inventory of all duties, responsibilities and qualifications required of employees assigned to this job.

* The salary listed in the header is an estimate based on salary data for similar jobs in the same area. Salary or compensation data found in the job description is accurate.
