Bias in AI-powered biometric algorithms can lead to unfair treatment of individuals in certain demographic groups based on their gender, age, or race. As with matching algorithms, bias in liveness detection can result in higher error rates and reduced access to mobile banking services for individuals based on their appearance. This white paper introduces methodologies for addressing such bias in support of Responsible AI principles.
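To make the notion of "higher error rates" concrete, the sketch below (not taken from this white paper; the field names, threshold, and grouping are illustrative assumptions) shows one common way such bias is quantified: computing the false rejection rate of a liveness detector separately for each demographic group and comparing the gap between groups.

```python
# Illustrative sketch only: quantify liveness-detection bias as a differential
# in per-group False Rejection Rate (FRR). All field names and the threshold
# value are assumptions for the example, not part of the white paper.

from collections import defaultdict

def per_group_frr(results, threshold=0.5):
    """Compute FRR on genuine (live) attempts, disaggregated by group.

    `results` is an iterable of dicts with keys:
      - "group": demographic label (e.g., an age band or skin-tone category)
      - "is_live": True for a genuine live presentation
      - "liveness_score": model score; scores below `threshold` are rejected
    """
    attempts = defaultdict(int)
    rejections = defaultdict(int)
    for r in results:
        if not r["is_live"]:
            continue  # FRR is measured on genuine live attempts only
        attempts[r["group"]] += 1
        if r["liveness_score"] < threshold:
            rejections[r["group"]] += 1
    return {g: rejections[g] / attempts[g] for g in attempts}

if __name__ == "__main__":
    sample = [
        {"group": "A", "is_live": True, "liveness_score": 0.9},
        {"group": "A", "is_live": True, "liveness_score": 0.7},
        {"group": "B", "is_live": True, "liveness_score": 0.4},
        {"group": "B", "is_live": True, "liveness_score": 0.8},
        {"group": "B", "is_live": False, "liveness_score": 0.3},
    ]
    frr = per_group_frr(sample)
    # A large gap between the highest and lowest per-group FRR means some
    # groups are rejected, and thus locked out of the service, more often.
    print(frr, "max differential:", max(frr.values()) - min(frr.values()))
```

In this framing, a liveness detector is considered biased when the per-group FRR differential is large, since members of the disadvantaged group are disproportionately blocked from completing authentication.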