KPMG Recruitment Drive 2025 | KPMG Hiring Data Engineer

www.djobbuzz.com 06 Apr 2025
Company Name
KPMG
Company Website
https://kpmg.com/xx/en.html
Job Role
Data Engineer
Job Type
  • Fresher
Job Location
  • Bengaluru/Bangalore
Skills
  • Java
  • Scala
  • Python
  • SQL
Education
  • BE/BTech
Branch
  • IT
  • CS
Job will expire on
05 Jun 2025

About Company

  • KPMG entities in India are professional services firm(s). These Indian member firms are affiliated with KPMG International Limited. KPMG was established in India in August 1993.
  • Our professionals leverage the global network of firms, and are conversant with local laws, regulations, markets and competition.
  • KPMG has offices across India in Ahmedabad, Bengaluru, Chandigarh, Chennai, Gurugram, Hyderabad, Jaipur, Kochi, Kolkata, Mumbai, Noida, Pune, Vadodara and Vijayawada.
  • KPMG entities in India offer services to national and international clients in India across sectors. We strive to provide rapid, performance-based, industry-focused and technology-enabled services, which reflect a shared knowledge of global and local industries and our experience of the Indian business environment.

Job Overview

  • Assemble large, complex structured and unstructured datasets that meet functional and non-functional business requirements.
  • Experience working with cloud data platforms and services.
  • Conduct code reviews, maintain code quality, and ensure best practices are followed.
  • Debug and upgrade existing systems.
  • Some knowledge of DevOps is nice to have.

Eligibility Criteria

  • Bachelor’s degree in Computer Science or a related field
  • Experience with Snowflake and knowledge of transforming data using dbt (data build tool)
  • Strong programming skills in Python and PySpark; some combination of Java and Scala is good to have
  • Experience in AWS and API integration in general, with knowledge of data warehousing concepts
  • Excellent communication and team collaboration skills

Job Description

  • Experience in data and analytics, having overseen end-to-end implementation of data pipelines on cloud-based data platforms.
  • Strong programming skills in Python and PySpark; some combination of Java and Scala is good to have.
  • Experience writing SQL, structuring data, and following sound data storage practices.
  • Experience with PySpark for data processing and transformation.
  • Experience building stream-processing applications (Spark Streaming, Apache Flink, Kafka, etc.).
  • Maintaining and developing CI/CD pipelines based on GitLab.
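As a flavor of the SQL and data-transformation work described above, here is a minimal, self-contained Python sketch using the standard-library sqlite3 module; the table and column names are purely illustrative, not taken from the job posting.

```python
import sqlite3

# Hypothetical example: load raw event rows, then transform them with SQL.
# Table and column names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [(1, 10.0), (1, 5.0), (2, 7.5)],
)

# Aggregate per user -- the kind of structured-data transformation
# a data-engineering pipeline step might perform.
rows = conn.execute(
    "SELECT user_id, SUM(amount) AS total "
    "FROM events GROUP BY user_id ORDER BY user_id"
).fetchall()
print(rows)  # → [(1, 15.0), (2, 7.5)]
conn.close()
```

In a real pipeline the same aggregation logic would typically run on a warehouse such as Snowflake or through PySpark rather than SQLite; the SQL itself is largely portable.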