AI/ML - Speech Systems Evaluation Engineer, Siri
Santa Clara Valley (Cupertino), California, United States
Machine Learning and AI
Posted: Sep 30, 2021
Weekly Hours: 40
Role Number: 200293048
Home Office: Yes
Are you passionate about innovation that matters? Would you like to play a part in shipping groundbreaking technology for large-scale systems, natural language, and artificial intelligence? You will be part of the team that builds tools and frameworks to evaluate the Speech features in various AI/ML products, among which is the most widely used intelligent assistant on the planet - Siri. Join the Speech MLSEE team at Apple.
Key Qualifications
- 6+ years of professional work experience in development or quality engineering.
- 2+ years developing test automation and frameworks (Java, Python, ObjC, Swift).
- Self-motivated and dedicated with proven creative and critical thinking capabilities.
- Ability to juggle multiple projects and flexibility to respond to a dynamic software environment.
- Strong teamwork skills and the ability to work in large, multi-functional teams.
- Experience with automation systems and system integration testing at scale.
- Experience writing detailed test plans and automation designs.
- Experience with testing and root cause analysis of multi-tiered, client-server architecture stacks.
- Familiarity with iOS, macOS, shell scripting, and terminal.
- Knowledge of statistics-based testing approaches a plus.
- Machine learning and neural networks experience a plus.
- Familiarity with a commonly used CI/CD framework (TeamCity, Jenkins, etc.) a plus.
- Experience testing and developing audio- and speech-based products a plus.
- Excellent written and verbal communication.
Description
We are seeking an experienced software engineer to drive speech quality evaluation. You will work closely with AI/ML engineers and data scientists across the org, developing frameworks and tools to evaluate Speech across the full range of AI/ML products, including Siri. You will also play a key role in defining the evaluation criteria for feature quality and release across all the platforms Speech runs on. Daily work involves understanding customer usage, participating in feature design and development, creating test plans for testing AI/ML products at scale, building automation tools, services, and test frameworks, and reporting to key stakeholders. You are an innovative, organized self-starter with excellent interpersonal skills. You have keen attention to detail and are able to work under tight deadlines. You work well with peers inside and outside the AI/ML org to ensure high-quality AI/ML product releases.
Education & Experience
B.S. or M.S. in a technical field required