Among the tools that Microsoft scrapped is one that determines a person's emotion from videos and photos.
Microsoft has discontinued public access to these features but continues to use the technology in its own product, Seeing AI.
This application uses computer vision to describe the world for people with visual impairments.
"Tools like emotion recognition can be valuable when used for a range of controlled accessibility scenarios," Microsoft said.
Limitations on creating AI voices
The Custom Neural Voice feature, which lets customers create AI voices from recordings of real people, faces similar restrictions.
The company explained that the tool has great potential in education, accessibility, and entertainment, but it is also easy to imagine how it could be misused to impersonate speakers and deceive listeners.
Microsoft limits access to the feature to managed customers and partners and requires the active participation of the speaker when an artificial voice is created.
The moves reflect efforts by major cloud providers to rein in sensitive technologies on their own as lawmakers in the United States and Europe continue to weigh comprehensive legal limits.
Experts have criticized emotion recognition tools, arguing that it is unscientific to equate outward expressions of emotion with internal emotional states. Facial expressions once thought to be universal in fact vary across population groups.
Since at least 2021, Microsoft has been reviewing whether emotion-recognition systems are rooted in science.
This new decision by Microsoft is part of an overhaul of the company's policies on the ethics of artificial intelligence.
The company's updated responsible AI standards emphasize accountability for who uses its services and more human oversight over where these tools are applied.
Blocking some features of facial recognition services
In practice, Microsoft is limiting access to some features of its facial recognition services and removing others entirely.
Users must apply for access to Azure Face's facial recognition capabilities and tell Microsoft how and where their systems will be deployed.
Some of the less harmful use cases (such as automatically blurring faces in photos and videos) remain open.
The company is also retiring Azure Face's ability to infer attributes such as gender, age, smile, facial hair, hair, and makeup.