In a recent report, the University of Technology Sydney (UTS) Human Technology Institute outlines a model law for facial recognition technology, designed to protect against harmful uses while fostering innovation.
The use of facial recognition and other remote biometric technologies is growing rapidly, raising concerns about privacy, mass surveillance, and bias both within the technology itself and in how it is deployed.
The report recognizes that our faces hold special significance: humans rely heavily on one another's faces to identify and interact with each other.
The report proposes a risk-based model law for facial recognition. Its starting point is that facial recognition should be developed and used in ways that uphold people's fundamental human rights.
The model law sets out three levels of risk to human rights, covering both individuals affected by a particular facial recognition application and the broader community.
The report, Facial Recognition Technology: Towards a Model Law, was co-authored by Prof Nicholas Davis, Prof Edward Santow, and Lauren Perry of the Human Technology Institute, UTS.
To access the report, and additional background material, visit this page.