We are Hiring
Become a Part of a Lively, Winning Team
With Umbrella, you will step into a nurturing, flexible, and positive work environment that supports you with guidance, training, and continuous learning. We believe in open communication, appreciation of hard work, team bonding, and celebrating together.

Job Description
- Experience: 7+ Years
- Location: Anywhere in India
- Qualification: Bachelor’s degree in Computer Science or a related stream
Responsibilities
- Implement Data Lakes using Big Data technologies.
- Understand Data Lake requirements and translate them into design and build deliverables.
- Propose new ideas and automation to reduce manual effort.
- Handle Hadoop installation, configuration, and support.
- Write MapReduce code for Hadoop clusters; help build new Hadoop clusters.
- Convert complex technical and functional requirements into detailed designs.
- Pre-process data using Pig, Hive, and Spark Streaming.
- Lead and mentor team members.
- Work on technical resolution of incidents and identify root causes.
- Engage with key stakeholders (business, markets, developers, vendors).
- Ensure the team's code complies with quality standards.
Required Skill Set and Experience
- 7+ years of overall IT experience, with at least 5 years in a relevant area.
- Excellent knowledge of databases and data analytics.
- Hands-on expertise in extracting and processing large volumes of data using Big Data technologies like Hadoop, Spark, Hive, etc.
- Strong coding skills in Python/Java/Scala with Spark are a must.
- Proficient in writing Spark RDD/DataFrame/SQL code to extract, transform, and aggregate data from multiple file formats, including JSON, CSV, and other compressed file formats.
- Good at writing complex SQL queries and aggregations.
- Strong understanding of OLAP/data warehousing concepts and dimensional models such as star and snowflake schemas.
- Hands-on experience working with streaming data using Flume, Kafka and other related Big Data tools.
- Hands-on expertise in designing and developing reliable and robust ETL pipelines.
- Experience scheduling and orchestrating data workflows using Airflow or similar tools.
- Experience with NoSQL databases like MongoDB, HBase, etc.
- Experience with at least one BI tool like Power BI, Tableau, etc.
- Experience implementing CI/CD processes for deployments is preferred.
- Working knowledge of AWS technologies like Redshift, Kinesis, Lambda, RDS, S3, Glue, Athena, and DynamoDB would be an added advantage.

Apply For Job
Build Your Career at Umbrella
Stay Ahead in the Game
Our employee-focused environment provides our employees the right opportunities to grow as individuals, acquire skills in cutting-edge technologies, and always stay ahead in the game.