Genpact is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients.
Genpact is inviting applications for the role of Software Engineer – Data Analyst.
Job Designation : Software Engineer – Data Analyst
Salary : 4 LPA – 6 LPA
Qualification : Bachelor’s degree
Experience : Freshers / 0 – 2 years
Skill Set :
- Strong understanding of Snowflake architecture
- Well-versed in data warehousing concepts
- Excellent understanding of Snowflake features and of integrating Snowflake with other data processing tools
- Able to create data pipelines for ETL/ELT (a minimal sketch follows this list)
- Excellent presentation and communication skills, both written and verbal
- Ability to problem-solve and architect in an environment with unclear requirements
- Able to create high-level and low-level design documents based on requirements
- Hands-on experience configuring, troubleshooting, testing, and managing data platforms, on premises or in the cloud
- Awareness of data visualisation tools and methodologies
- Ability to work independently on business problems and generate meaningful insights
- Experience or knowledge of Snowpark, Streamlit, or GenAI is good to have, but not mandatory
- Experience implementing Snowflake best practices
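By way of illustration only (not part of the role description): a minimal ETL sketch in Python, assuming the snowflake-connector-python package. The connection parameters and table names (RAW_EVENTS, CURATED_EVENTS) are hypothetical placeholders.

```python
# Minimal ETL sketch, assuming snowflake-connector-python is installed.
# All connection parameters and table names below are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",    # hypothetical account identifier
    user="etl_user",
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)
try:
    cur = conn.cursor()

    # Extract: read raw rows from a hypothetical landing table.
    cur.execute("SELECT id, payload FROM RAW_EVENTS WHERE load_date = CURRENT_DATE")
    rows = cur.fetchall()

    # Transform: a trivial in-Python cleanup step for illustration.
    cleaned = [(row_id, payload.strip().lower()) for row_id, payload in rows]

    # Load: write the transformed rows into a curated table.
    cur.executemany(
        "INSERT INTO CURATED_EVENTS (id, payload) VALUES (%s, %s)",
        cleaned,
    )
    conn.commit()
finally:
    conn.close()
```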
Job Description :
- Requirement gathering, creating design documents, providing solutions to customers, and working with offshore teams
- Writing SQL queries against Snowflake and developing scripts to extract, load, and transform data
- Hands-on experience with Snowflake utilities such as SnowSQL, bulk copy, Snowpipe, Tasks, Streams, Time Travel, cloning, the query optimizer, metadata management, data sharing, stored procedures and UDFs, Snowsight, and Streamlit (a bulk-load sketch follows this list)
- Experience with the Snowflake cloud data warehouse and AWS S3 buckets or Azure Blob Storage containers for integrating data from multiple source systems
- Some experience with AWS services (S3, Glue, Lambda) or Azure services (Blob Storage, ADLS Gen2, ADF)
- Good experience integrating Python/PySpark with Snowflake and the cloud (AWS/Azure), with the ability to leverage cloud services for data processing and storage
- Proficiency in the Python programming language, including knowledge of data types, variables, functions, loops, conditionals, and other Python-specific concepts
- Knowledge of ETL (Extract, Transform, Load) processes and tools, and the ability to design and develop efficient ETL jobs using Python and PySpark
- Some experience with Snowflake RBAC and data security
- Good experience implementing CDC or SCD type-2 (a minimal sketch also follows this list)
- In-depth understanding of data warehouse and ETL concepts
- Experience in requirement gathering, analysis, development, and deployment
- Experience building data ingestion pipelines
- Optimize and tune data pipelines for performance and scalability
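For the bulk copy and S3 integration items above, a hedged sketch of a COPY INTO bulk load from an external S3 stage, issued through the same Python connector; the stage name (@S3_LANDING_STAGE) and table name are hypothetical.

```python
# Hedged bulk-load sketch: COPY INTO from a hypothetical external S3 stage.
import snowflake.connector

conn = snowflake.connector.connect(account="my_account", user="etl_user",
                                    password="***", warehouse="ETL_WH",
                                    database="ANALYTICS", schema="STAGING")
try:
    conn.cursor().execute("""
        COPY INTO RAW_EVENTS
        FROM @S3_LANDING_STAGE/events/               -- stage defined over s3://...
        FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
        ON_ERROR = 'CONTINUE'                        -- skip bad rows, keep loading
    """)
finally:
    conn.close()
```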
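And for the CDC / SCD type-2 item, a hedged two-step sketch: first expire changed current rows with MERGE, then insert new versions. DIM_CUSTOMER, STG_CUSTOMER, and all columns are hypothetical names for illustration.

```python
# Hedged SCD type-2 sketch against hypothetical DIM_CUSTOMER / STG_CUSTOMER tables.
import snowflake.connector

# Step 1: close out current dimension rows whose tracked attribute changed.
EXPIRE_CHANGED_ROWS = """
    MERGE INTO DIM_CUSTOMER d
    USING STG_CUSTOMER s
      ON d.customer_id = s.customer_id AND d.is_current = TRUE
    WHEN MATCHED AND d.email <> s.email THEN UPDATE SET
      is_current = FALSE,
      end_date   = CURRENT_TIMESTAMP()
"""

# Step 2: insert a new current version for new keys and for keys
# whose current row was just expired in step 1.
INSERT_NEW_VERSIONS = """
    INSERT INTO DIM_CUSTOMER (customer_id, email, start_date, end_date, is_current)
    SELECT s.customer_id, s.email, CURRENT_TIMESTAMP(), NULL, TRUE
    FROM STG_CUSTOMER s
    LEFT JOIN DIM_CUSTOMER d
      ON d.customer_id = s.customer_id AND d.is_current = TRUE
    WHERE d.customer_id IS NULL
"""

conn = snowflake.connector.connect(account="my_account", user="etl_user",
                                    password="***", warehouse="ETL_WH",
                                    database="ANALYTICS", schema="CURATED")
try:
    cur = conn.cursor()
    cur.execute(EXPIRE_CHANGED_ROWS)
    cur.execute(INSERT_NEW_VERSIONS)
    conn.commit()
finally:
    conn.close()
```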
Location : Hyderabad, India