NetApp Recruitment Drive 2025 | NetApp Hiring Software Engineer

www.djobbuzz.com 27 Apr 2025
Company Name
NetApp
Company Website
http://netapp.com/
Experience
1-3 years
Job Role
Software Engineer
Job Type
  • Experienced
  • Fresher
Job Location
  • Bengaluru/Bangalore
Skills
  • NoSQL
  • Apache Spark
  • SQL
Education
  • BE/BTech
Branch
  • CS
  • IT
Job will expire on
26 Jun 2025

About Company

  • NetApp is the intelligent data infrastructure company, turning a world of disruption into opportunity for every customer.
  • No matter the data type, workload or environment, we help our customers identify and realize new business possibilities. And it all starts with our people.
  • If this sounds like something you want to be part of, NetApp is the place for you.
  • You can help bring new ideas to life, approaching each challenge with fresh eyes. Of course, you won't be doing it alone.
  • At NetApp, we're all about asking for help when we need it, collaborating with others, and partnering across the organization - and beyond.

Job Overview

  • As an SDE in NetApp's India R&D division, you will be responsible for the development, validation, implementation, and operations of Big Data engineering software across both cloud and on-prem environments. You will be part of a highly skilled technical team named NetApp Active IQ.
  • The Active IQ Platform/Datahub processes 10 trillion data points per month, with around 25 PB of data in its data sources. This platform enables advanced AI and ML techniques to uncover opportunities to proactively protect and optimize NetApp storage, and then provides the insights and actions to make it happen. We call this "actionable intelligence", and it leads to higher availability, improved security, and simplified administration.
  • Your focus will be on data engineering projects: as a Data Engineer, you will be responsible for the development and operations of the microservices in Active IQ's Big Data platform.
  • This position requires an individual who is creative, team-oriented, technology-savvy, driven to produce results, and able to work across teams.

Eligibility Criteria

  • 1 to 3 years of experience with Java and Python, writing data pipelines and data processing layers.
  • Strong CS fundamentals, Unix shell scripting, and database concepts.
  • Working expertise with data processing pipeline implementation, Kafka, Spark, NoSQL databases (especially MongoDB; Cassandra and TSDB preferred), and SQL.
  • Familiarity with GenAI, Agile concepts, and Continuous Integration/Continuous Delivery.
  • Experience in a Linux environment with containers (Docker and Kubernetes) is a plus.
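To give a sense of the "data pipelines and data processing layers" mentioned above, here is a minimal, self-contained sketch in plain Python. It is purely illustrative: the record fields, system IDs, and values are hypothetical, and in a real Active IQ-style pipeline the extract step would read from a source such as Kafka and the transform step would typically run on Spark rather than in-process.

```python
import json
from collections import defaultdict

# Hypothetical raw telemetry records, one JSON document per line
# (stand-ins for the data points a production pipeline would consume).
RAW_LINES = [
    '{"system_id": "A1", "metric": "iops", "value": 1200}',
    '{"system_id": "A1", "metric": "iops", "value": 1800}',
    '{"system_id": "B2", "metric": "iops", "value": 900}',
    '{"system_id": "B2", "metric": "latency_ms", "value": 4}',
]

def parse(lines):
    """Deserialize each line into a dict (the 'extract' step)."""
    return (json.loads(line) for line in lines)

def aggregate(records, metric):
    """Average the chosen metric per system (the 'transform' step)."""
    totals = defaultdict(lambda: [0, 0])  # system_id -> [sum, count]
    for rec in records:
        if rec["metric"] == metric:
            totals[rec["system_id"]][0] += rec["value"]
            totals[rec["system_id"]][1] += 1
    return {sys_id: s / c for sys_id, (s, c) in totals.items()}

avg_iops = aggregate(parse(RAW_LINES), "iops")
print(avg_iops)  # {'A1': 1500.0, 'B2': 900.0}
```

The same extract/transform/aggregate shape carries over to Spark or Kafka Streams jobs; only the execution engine and data volumes change.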

Job Description

  • Build a Big Data platform and solutions, primarily based on open-source technologies, that are fault-tolerant and scalable.
  • Interact with Active IQ engineering teams across geographies to leverage expertise and contribute to the tech community.
  • Identify the right open-source tools to deliver product features by performing research, POCs/pilots, and/or interacting with various open-source forums.
  • Deploy and monitor products on both cloud and on-prem platforms.
  • Work on technologies related to NoSQL, SQL, and in-memory platforms.
  • Develop and implement best-in-class monitoring processes to enable data applications to meet SLAs.