Machine learning (ML) and artificial intelligence (AI) have the potential to enhance healthcare infrastructure and supply in low- and middle-income countries (LMICs). However, concerns about algorithmic bias and unfairness demand that ML and AI be applied cautiously. Owing to technological inexperience, preexisting cultural bias against minority groups, and a lack of legislative safeguards, LMIC societies are particularly vulnerable to the harms of AI bias and unfairness. To improve guidance for global health, we must assess the appropriateness, fairness, and bias of any proposed application. This assessment involves 1) evaluating fairness by examining a model's impact on different demographic groups and selecting one of several mathematical definitions of group fairness; 2) addressing bias, the systematic tendency of a model to favor one demographic group over another, which can be mitigated but, if left unaddressed, can lead to unfairness; and 3) judging appropriateness by determining how the algorithm should be used in the local context and properly matching the machine learning model to the target population. Finally, we present a case study of machine learning applied to the screening for and diagnosis of pulmonary diseases in Pune, India. We hope these approaches and ideas will aid others in their efforts to apply machine learning and AI to global health.
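As a brief, illustrative sketch of what such mathematical definitions of group fairness can look like (the two criteria below are common examples from the fairness literature and are not prescribed by this text; here $\hat{Y}$ denotes a model's binary prediction, $Y$ the true outcome, and $A$ a protected attribute such as sex or ethnicity), consider:

Demographic parity: $P(\hat{Y} = 1 \mid A = a) = P(\hat{Y} = 1 \mid A = a')$ for all groups $a$ and $a'$;

Equalized odds: $P(\hat{Y} = 1 \mid Y = y, A = a) = P(\hat{Y} = 1 \mid Y = y, A = a')$ for $y \in \{0, 1\}$ and all groups $a$ and $a'$.

In general, such criteria cannot all be satisfied at once, which is why selecting a definition suited to the local context and the model's intended use is a necessary step.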