Creating a Dataset of Faces for my Autoencoder with Semi-supervised Learning
By Eric Antoine Scuccimarra
I am still working on my face autoencoder in my spare time, although I have much less spare time lately. My non-variational autoencoder works great - it can very accurately reconstruct any face in my dataset of 400,000 faces, but it doesn't work at all for interpolation or anything like that. So I have also been trying to train a variational autoencoder, but it has a lot more difficulty learning.
For a face which is roughly centered and looking in the general direction of the camera it can do a somewhat decent job, but if the picture is off in any way - there is another face off to the side, there is something blocking the face, the face is at a strange angle, etc. - it does a pretty bad job. And since I want to use this for interpolation, training it on these bad faces doesn't really help anything.
One of the biggest datasets I am using is this one from ETHZ. The dataset was created to train a network to predict the age of the person in the image, and while the images are all of good quality, many of them have some of the issues I mentioned above, and some are not faces at all - drawings or cartoons, for example. The other datasets I am using consist entirely of properly cropped faces, but this one is almost 200k images, so omitting it completely would significantly reduce the size of my training data.
The other day I decided I needed to improve the quality of my training dataset if I ever want to get this variational autoencoder properly trained, and to do that I need to filter out the bad images from the ETHZ IMDB dataset. The dataset was already created using face detectors, but I want to remove images that have any of the following issues:
- Multiple faces or parts of faces in the image
- Images with something blocking part of the face
- Images where the faces are not generally facing forward, such as profiles
I started trying to curate them manually, but after going through 500 of the 200k images I realized that would not be feasible. It would be easy to train a neural network to classify the faces, but that requires labeled training data - which still means manually classifying faces. So what I did was take another dataset of faces that were all good, add about 700 bad faces from the IMDB dataset, and make a new dataset of about 7,000 images. Then I took a pre-trained discriminator I had previously used as part of a GAN for generating faces and retrained it to classify faces as good or bad.
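The retraining step is standard transfer learning: freeze the discriminator's convolutional features and fine-tune a fresh final layer as a binary good/bad classifier. The sketch below uses a small stand-in network, since the actual GAN discriminator and its weights aren't shown in the post - the architecture, image size, and hyperparameters here are all assumptions, but the freeze-and-replace pattern is the same for any conv net ending in a single logit.

```python
import torch
import torch.nn as nn

# Small stand-in for the pretrained GAN discriminator (hypothetical
# architecture; the real one from the post is not shown).
discriminator = nn.Sequential(
    nn.Conv2d(3, 16, 4, stride=2, padding=1),   # 64x64 -> 32x32
    nn.LeakyReLU(0.2),
    nn.Conv2d(16, 32, 4, stride=2, padding=1),  # 32x32 -> 16x16
    nn.LeakyReLU(0.2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1),                 # single real/fake logit
)

# Freeze the pretrained features, then swap in a fresh, trainable head
# that will learn to predict "good face" instead of "real image".
for p in discriminator.parameters():
    p.requires_grad = False
discriminator[-1] = nn.Linear(32 * 16 * 16, 1)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(discriminator[-1].parameters(), lr=1e-3)

# One training step on a dummy batch; real code would loop over a
# DataLoader of labeled face crops for ~10 epochs as described above.
images = torch.randn(8, 3, 64, 64)            # stand-in for face crops
labels = torch.randint(0, 2, (8, 1)).float()  # 1 = good face, 0 = bad
optimizer.zero_grad()
loss = criterion(discriminator(images), labels)
loss.backward()
optimizer.step()

# At evaluation time, a sigmoid turns the logit into P(good).
probs = torch.sigmoid(discriminator(images))
print(probs.shape)
```

Only the new head's parameters are passed to the optimizer, so the frozen features act as a fixed face-aware feature extractor - which is why so little labeled data goes a long way here.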
I ran this for about 10 epochs, until it was achieving very good accuracy, and then used it to evaluate the IMDB dataset. Any image to which it gave less than a 0.03 probability of being good I moved into the bad training set, and any image to which it gave greater than a 0.99 probability of being good I moved into the good training set. Then I continued training it, and so on.
This is called weak supervision or semi-supervised learning, and it works a lot better than I thought it would. After training for a few hours, the images which are moved all seem to be correctly classified, and after each iteration the size of the training dataset grows to allow the network to continue learning. Since I only move images which have very high or very low probabilities, the risk of a misclassification should be relatively low, and I expect to be able to completely sort the IMDB dataset by the end of tomorrow, maybe even sooner. What would have taken weeks or longer to do manually has been reduced to days thanks to transfer learning and weak supervision!
Labels: coding, data_science, machine_learning, pytorch, autoencoders