Job Description
About the Role
We are a leading information and technology-enabled health services business, dedicated to modernizing the healthcare system and improving lives. Our innovative partnerships provide technology and tools that enable unprecedented collaboration and efficiency.
Main Responsibilities
* Data Platform Integration: Seamlessly integrate data platforms such as Databricks, Azure Machine Learning, and Snowflake into the software development process, leveraging their capabilities for enhanced data analytics and processing.
* CI/CD Pipeline Management: Create and maintain CI/CD pipelines using tools such as GitHub Actions or equivalent, streamlining the software delivery process.
* Infrastructure as Code: Implement and manage infrastructure as code, automating deployment processes for increased efficiency and consistency.
* Pipeline Optimization: Monitor and troubleshoot pipeline issues, proactively identifying and implementing solutions to enhance pipeline performance.
* Collaboration: Collaborate closely with development teams to seamlessly integrate pipeline processes with their workflows, ensuring a smooth development lifecycle.
* Cloud Resource Management: Optimize cloud resources on Azure, including compute, storage, and networking, to support the software applications efficiently and cost-effectively.
* Process Improvement: Continuously evaluate and enhance testing processes and tools to maintain the highest level of quality in delivered solutions.
* Technology Research: Stay current with emerging technologies and trends in the DevOps field, sharing relevant knowledge and insights with the team to drive innovation and improvement.
Requirements
* Linux Proficiency: Extensive experience in Linux system administration and support within enterprise IT or service provider environments.
* DevOps Expertise: Hands-on experience with IT operational automation and configuration management tools, and a strong understanding of DevOps practices, processes, and tools (e.g., Git, CI/CD, Terraform, Chef, Jenkins).
* Cloud Knowledge: Familiarity with public cloud platforms such as AWS, Azure, or GCP, and a solid understanding of container orchestration platforms like Kubernetes.
* Scripting Skills: Proficiency in scripting languages such as Bash and Python, plus experience writing infrastructure configurations in Terraform (HCL).
* Communication: Excellent communication skills, both written and verbal, in English are essential for effective collaboration and documentation.
* Self-Motivated: Self-motivated, resourceful, creative, innovative, and results-driven with strong problem-solving and analytical capabilities.
* Agile Experience: Experience in Agile development methodologies and a history of successful collaboration within cross-functional teams.
Preferred Qualifications
* Containerization Proficiency: Experience in building containers using Dockerfiles, and familiarity with container build tools like Bazel.
* Kubernetes Deployment: Working knowledge of deploying applications within a Kubernetes cluster using Helm, streamlining the deployment process.
* Cloud Administration: Practical experience in administering applications within a public cloud environment, with a preference for Azure expertise.
* Monitoring and Logging Tools: Familiarity with monitoring and logging tools such as Prometheus, Splunk, Grafana, Elasticsearch, and Kibana to enhance observability and troubleshooting.
* DevOps Delivery: Proven experience in software development and delivery using DevOps practices, further contributing to our culture of efficiency and automation.
* Data Science Tooling: Exposure to data science tools such as Spark, RStudio, Jupyter, PyTorch, and SQL.