About the Role
You will join a lean, high-impact engineering team that builds clinically validated medical devices used by doctors and healthcare workers in 40+ countries. Your code will directly power offline-capable AI diagnostic devices that screen for blindness-causing diseases (diabetic retinopathy, glaucoma, etc.) in remote and resource-constrained settings.
We move fast and value ownership over process. If you love solving hard problems at the intersection of mobile systems, computer vision, and healthcare, this role is for you.
Responsibilities
- Develop and maintain applications primarily for iOS, with opportunities to contribute to our Android and Windows/React products.
- Work hands-on in an R&D environment: experiment, prototype, benchmark, and iterate.
- Contribute to real-time image processing, quality checks, and medical imaging workflows.
- Integrate ML models, optimize inference, and collaborate with AI teams.
- Follow development best practices, participate in code reviews, and uphold documentation standards.
Must Have
- Strong hands-on experience in Swift for iOS and macOS development.
- Good understanding of the Swift ecosystem, MVVM, and UIKit/SwiftUI.
- Good knowledge of Grand Central Dispatch (GCD), multithreading, and performance tuning.
- Strong understanding of object-oriented programming.
- Experience with REST APIs, networking, and secure data handling.
- Experience with Git, branching strategies, and coding standards.
- Ability to work across multiple platforms (iOS now, with readiness to learn Android/React).
Good to Have
- Exposure to Android (Kotlin) or React for Windows-based apps.
- Familiarity with OpenCV, Vision Framework, or image processing concepts.
- Experience integrating AI/ML models (CoreML, TFLite, ONNX).
- Knowledge of unit/UI testing (XCTest).
Experience
- 2 to 4 years