This research is a quantitative content analysis of 16 FRT providers' websites, documents, social media posts, and news coverage from 2018 to April 2023. The aim is to examine the extent of companies' engagement with and commitment to establishing algorithmic fairness. I focus on companies based in OECD member states that offered, sold, or still sell facial recognition technology to law enforcement agencies, to underline the sensitivity of such use and stress providers' obligation to eliminate the risk of bias; this use is also among the most controversial applications of AI. The companies in the sample are all based in democratic states, reflecting a binding consensus on human rights, and the OECD was the first organization to coordinate and establish an intergovernmental policy on AI. This study thus investigates one of the core AI ethics principles, fairness, in the facial recognition industry.

The results of the quantitative content analysis show that most FRT providers in the sample act to either mitigate or eliminate the risk of bias, demonstrating an ethical commitment to algorithmic fairness by developing a tool, hiring a dedicated team, adopting a policy, or halting sales of the technology. Their conduct balances profit-driven motives against citizens' interests; this is especially true of the companies that stopped selling FRT to law enforcement agencies, thereby eliminating the risk of bias in that use. However, these providers do not assume responsibility for the systemic racism or sexism that has occurred; they blame law enforcement agencies for misuse and call for more regulation. A third of the sample accepts and transfers the risk by using no instrument to enable Fair AI, illustrating a preference for profit and growth over the human right to equitable treatment. Most of this group communicates its engagement with the cause of Fair AI but implements no tool to mitigate the risk, exploiting the cause as an unethical marketing ploy.