Big Data Infrastructure Software Development Engineer
Austin, Texas, United States
Software and Services
Posted: Oct 4, 2021
Role Number: 200295463
Imagine what you could do here. At Apple, great ideas have a way of becoming great products, services, and customer experiences very quickly. Bring passion and dedication to your job and there's no telling what you could accomplish. We are the Big Data Engineering team: we manage a range of state-of-the-art open source technologies across streaming, data science, and Big Data analytics, including Kafka, Hadoop, Spark, Kubernetes, object storage, and AI/ML. A passion for building highly scalable, distributed web applications that efficiently handle massive volumes of data is what we are looking for! Do you want your work to make a difference in the lives of millions of people who are passionate about Apple's products and services? We are seeking a highly motivated, detail-oriented, energetic individual with excellent written and verbal communication skills who is not afraid to think outside the box and question assumptions. In this role, you will be part of a fast-growing, cohesive team with many exciting responsibilities related to Big Data.
Key Qualifications
- At least 5 years (preferably 8 years) of experience in a professional programming position
- Proven understanding of and experience with core Java programming, including performance, multi-threading, and garbage collection
- Strong education in Computer Science, Software Engineering, Algorithms, Operating Systems, Networking, etc.
- Experience handling architectural and design considerations such as performance, scalability, reusability, and flexibility
- Sound knowledge of Linux and of systems/application design and architecture
- Experience with Python and/or Go development highly desirable
- Experience with public clouds (GCP & AWS) highly desirable
- Passion for working with and processing large amounts of data
- Solid experience in the design, architecture, and deployment of Apache Kafka and Apache ZooKeeper
- Experience optimizing and tuning Kafka brokers/clusters based on performance metrics
- Proven experience and knowledge of Big Data technologies such as Hadoop and Spark is desirable
- Experience establishing best practices, standards, and automation for the onboarding, monitoring, and healing of Big Data services
- Expertise in the lifecycle management of Kafka/Hadoop clusters, including security patching, adding/removing brokers in a cluster, and restarting brokers without disrupting applications (see the health-check sketch after this list)
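Non-disruptive rolling restarts like those in the last bullet typically hinge on confirming cluster health between broker restarts. Below is a minimal, illustrative sketch in Java using the Kafka AdminClient (Kafka clients 3.x): it counts under-replicated partitions, a common signal that it is not yet safe to take down the next broker. The bootstrap address and class name are placeholders, not part of the posting.

```java
import java.util.Properties;
import java.util.Set;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class RollingRestartHealthCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder bootstrap address for this sketch.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");

        try (Admin admin = Admin.create(props)) {
            // Fetch all topic names, then describe them to inspect partition state.
            Set<String> topics = admin.listTopics().names().get();
            long underReplicated = admin.describeTopics(topics).allTopicNames().get()
                    .values().stream()
                    .flatMap(desc -> desc.partitions().stream())
                    // A partition is under-replicated when its in-sync replica
                    // set is smaller than its full replica set.
                    .filter(p -> p.isr().size() < p.replicas().size())
                    .count();

            if (underReplicated == 0) {
                System.out.println("All partitions fully replicated: safe to restart the next broker.");
            } else {
                System.out.printf("%d under-replicated partitions: wait before restarting.%n",
                        underReplicated);
            }
        }
    }
}
```

In practice a check like this would be wrapped in the team's own restart automation, alongside other signals such as offline partitions and controller availability.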
Description
- Set up Kafka brokers, Kafka MirrorMaker, and Kafka ZooKeeper on hosts spanning bare-metal systems, VMs, and containers (a topic-setup sketch follows this list)
- Set up Hadoop clusters and related technologies
- Develop scalable, robust systems that adapt readily to changing business needs
- Define and develop Big Data technologies, platforms, and applications
- Architect, improve, and scale diverse applications to the next level
- Work with application owners, developers, and project managers
- Recommend and deploy tools and processes that enable rapid development, deployment, and operation of data solutions
- Be the go-to expert for application teams facing architectural decisions or complex technical problems, such as scaling and tuning
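As one concrete example of the setup and automation work described above, the sketch below creates a replicated topic with the Kafka AdminClient. The topic name, partition count, replication factor, and config values are illustrative assumptions, not requirements from the posting.

```java
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class TopicSetup {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder bootstrap address for this sketch.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");

        try (Admin admin = Admin.create(props)) {
            // 12 partitions, replication factor 3: illustrative values only.
            NewTopic topic = new NewTopic("events", 12, (short) 3)
                    .configs(Map.of(
                            "min.insync.replicas", "2",    // tolerate one broker outage
                            "retention.ms", "604800000")); // retain data for 7 days
            admin.createTopics(List.of(topic)).all().get();
            System.out.println("Topic created.");
        }
    }
}
```

Setting min.insync.replicas to 2 with a replication factor of 3 lets producers using acks=all keep writing through a single broker outage.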
Education & Experience
BS in Computer Science or equivalent (MS preferred)