We are Hiring
Become a Part of a Lively, Winning Team
With Umbrella, you will step into a nurturing, flexible, and positive work environment that supports you with guidance, training, and continuous learning. We believe in open communication, appreciating hard work, team bonding, and celebrating together.

Job Description
- 3+ years of experience
- Anywhere in India
- Bachelor’s degree in Computer Science or a related stream
Responsibilities
- Understand the problem statement and client requirements, then design and build complex solutions using programming languages and Big Data service platforms.
- Translate business requirements into technical terms and drive the team effort to design, build, and manage technology solutions that solve business problems.
- Implement and enhance Big Data pipelines for data ingestion, processing (ETL frameworks), and consumption using Spark and Hive.
- Work closely with technology teams to resolve identified production issues impacting the existing infrastructure and solutions, covering data quality, data assurance, refresh timeliness, and data security governance.
- Extend analytics support across the entire gamut of the business (Customer Acquisition, Portfolio Management, New Product Development).
- Create and maintain optimal data pipeline architecture, and assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, redesigning infrastructure for greater scalability, etc.
Required Skill Set and Experience
- Hands-on expertise in extracting and processing large volumes of data using Big Data technologies such as Hadoop, Spark, and Hive.
- Strong coding skills in Python with Spark (PySpark) are a must.
- Experience installing, configuring, and supporting Hadoop.
- Ability to write MapReduce code for Hadoop clusters and to help build new Hadoop clusters.
- Ability to convert complex technical and functional requirements into detailed designs.
- Proficiency in writing Spark RDD/DataFrame/SQL code to extract, transform, and aggregate data from multiple file formats, including JSON, CSV, and other compressed file formats (see the illustrative sketch after this list).
- Experience pre-processing data using Pig, Hive, and Spark Streaming.
- Strong skills in writing complex SQL queries and aggregations.
- Strong understanding of OLAP and data warehousing concepts, including dimensional models such as star and snowflake schemas.
- Strong hands-on experience with streaming data using Flume, Kafka, and other related Big Data tools.
- Hands-on expertise in designing and developing reliable, robust ETL pipelines.
- Experience with NoSQL databases such as MongoDB and HBase.
- Experience with at least one BI tool, such as Power BI or Tableau.
- Working knowledge of AWS technologies such as Redshift, Kinesis, Lambda, RDS, S3, Glue, Athena, and DynamoDB would be an added advantage.
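To give candidates a concrete sense of the day-to-day work, here is a minimal, illustrative PySpark sketch of the extract-transform-aggregate pattern named in the skills above. The bucket paths, column names, and schema are hypothetical placeholders, not a description of Umbrella's actual pipelines.

```python
# Illustrative ETL sketch: all paths, columns, and the schema are
# hypothetical placeholders, not Umbrella's actual data.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: raw events arrive as JSON; the customer dimension is a CSV.
events = spark.read.json("s3://example-bucket/raw/events/")
customers = spark.read.option("header", True).csv("s3://example-bucket/dim/customers.csv")

# Transform: cast types and join the fact data to the dimension.
events = events.withColumn("amount", F.col("amount").cast("double"))
joined = events.join(customers, on="customer_id", how="left")

# Aggregate: total spend and active customers per segment per day.
daily_spend = (
    joined.groupBy("segment", F.to_date("event_ts").alias("day"))
          .agg(F.sum("amount").alias("total_spend"),
               F.countDistinct("customer_id").alias("active_customers"))
)

# Load: write partitioned Parquet for downstream consumption.
daily_spend.write.mode("overwrite").partitionBy("day").parquet(
    "s3://example-bucket/curated/daily_spend/"
)
```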

Apply For Job
Build Your Career at Umbrella
Stay Ahead in the Game
Our employee-focused environment provides the right opportunities to grow as an individual, acquire skills in cutting-edge technologies, and always stay ahead in the game.