It's mostly because mistakes in a face are far more noticeable to us than mistakes elsewhere: human perception is especially attuned to faces. The creator of Craiyon (formerly known as DALL-E mini) explains it here: https://www.sciencefocus.com/news/dall-e-mini-creator-explains-blurred-faces-going-viral-and-the-future-of-the-project/
The same applies to AI upscaling. General-purpose AI upscalers can handle any kind of content (landscapes, buildings, etc.), but faces upscaled with them usually look noticeably wrong to human perception. The only AI upscalers that handle faces well use specialised models and training data that deal exclusively with faces.