Saturday, May 02, 2009

Keeping Abreast of Pornographic Research in Computer Science

[Thanks to Kevin @ Free Democracy for this link] There are burgeoning numbers of Ph.D.s and grad students who are choosing to study pornography. Techniques for the analysis of "objectionable images" are gaining increased attention (and grant money) from governments and research institutions around the world, as well as Google. But what, exactly, does computer science have to do with porn? In the name of academic pursuit, let's roll up our sleeves and plunge deeply into this often hidden area that lies between the covers of top-shelf research journals.

Lena

One cannot do research in image processing without an encounter with Lena (pronounced "Lenna"). The image of the woman with a feathered hat has become the de facto test image for many algorithms, and it appears in thousands of articles and conference papers. And it is porn:

Alexander Sawchuk estimates that it was in June or July of 1973 when he, then an assistant professor of electrical engineering at the USC Signal and Image Processing Institute (SIPI), along with a graduate student and the SIPI lab manager, was hurriedly searching the lab for a good image to scan for a colleague's conference paper. They had tired of their stock of usual test images, dull stuff dating back to television standards work in the early 1960s. They wanted something glossy to ensure good output dynamic range, and they wanted a human face. Just then, somebody happened to walk in with a recent issue of Playboy.

The engineers tore away the top third of the centerfold so they could wrap it around the drum of their Muirhead wirephoto scanner, which they had outfitted with analog-to-digital converters (one each for the red, green, and blue channels) and a Hewlett Packard 2100 minicomputer. The Muirhead had a fixed resolution of 100 lines per inch and the engineers wanted a 512 x 512 image, so they limited the scan to the top 5.12 inches of the picture, effectively cropping it at the subject's shoulders.

The rest of the story (and the rest of Lena) can be found here. Indeed, the 70s marked the beginning of a long relationship between computer science and pornography. However, after the birth of the world wide web, things really got hot and heavy.

Finding Naked People

In the 1990s the world wide web began to explode, pumping information of all kinds into the homes of the technologically savvy at rates as high as 9600 bits per second. It was the time when search engines such as WebCrawler, AltaVista, and Yahoo began the arduous task of spidering the scattered bits of information on Internet servers everywhere. The problem was that someone might search for a completely innocuous query such as the Trojan Room Coffee Pot, and come up with images that were unexpected, inappropriate, and, depending on one's tastes, objectionable.

It's not likely to be on his business card, but David A. Forsyth is an expert in web pornography, having served on the NRC committee for this topic. It is evident from his web page that he has a sense of humour, which explains the superbly descriptive title for his 1996 paper, Finding Naked People. Forsyth was one of the first researchers to study the problem of identifying objectionable content.

One of Forsyth's research areas is tracking people in images and videos and figuring out their pose. In the general case, the system has to cope with the fact that people can wear clothes. It would be easier if the subjects all wore the same colour, or didn't wear anything at all. Finding Naked People describes a way of first masking out areas of skin. The areas are then grouped together into human figures (visualized by drawing a stick figure on the image). The crux of the paper is the grouping algorithm. The grouper knows rules such as how limbs fit together into a body, and the fact that a person cannot have more than two arms. Using these rules, it figures out how to superimpose a body onto the skin patches. If it can do this successfully, the image probably contains a naked person. If it cannot, then the image is probably something else, like a lamp.
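Skin detection sounds like black magic, but a crude version fits in a few lines. The paper's real filter works in a log-opponent colour space and also checks texture; the simple RGB rule below is only a stand-in heuristic, sketched in Python, to show the flavour of the masking step:

    import numpy as np

    def skin_mask(rgb):
        """Boolean mask of probable skin pixels. A crude stand-in;
        the paper itself uses a log-opponent colour space plus a
        texture filter. rgb: H x W x 3 uint8 array."""
        r = rgb[..., 0].astype(int)
        g = rgb[..., 1].astype(int)
        b = rgb[..., 2].astype(int)
        return ((r > 95) & (g > 40) & (b > 20) &   # bright enough
                (r > g) & (r > b) &                # reddish hue
                (np.abs(r - g) > 15))              # not grey

    # Only images with enough skin-coloured pixels get passed on
    # to the (much slower) figure grouper:
    # if skin_mask(img).mean() > 0.3: run_grouper(img)

The run_grouper call and the 0.3 threshold are, of course, made up; the grouper itself is the hard part.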

Here is a visualization of the skin probability field from the paper, with the grouper output segments superimposed on top:

More probability masks can be found in Proceedings of the 4th European Conference on Computer Vision, volume II on page 598.

It's better with more than one

Finding Naked People piqued a lot of interest in the field of objectionable images, and the skin matching idea is now the first step in many algorithms. However, as James Ze Wang of Stanford notes, "it takes about 6 minutes on a workstation for the figure grouper in their algorithm to process a suspect image passed by the skin filter."

In their System for Screening Objectionable Images, Wang and his colleagues describe the WIPE™ method for screening content. They use a wavelet edge detection algorithm to obtain the shape of the image. Edge detection transforms an image into the outlines of the objects in it. Wavelet edge detection allows them to tune the detector to pick up sharp or increasingly blurry edges until well-defined shapes appear.
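For the curious, here is a rough sketch of multi-scale wavelet edge detection using the PyWavelets library. The choice of wavelet, decomposition level, and threshold are my own guesses, not the values from the WIPE paper:

    import numpy as np
    import pywt  # PyWavelets

    def wavelet_edges(gray, level=2, thresh=20.0):
        """Edge map from the detail coefficients of a 2-D wavelet
        decomposition. A higher `level` responds to blurrier edges."""
        coeffs = pywt.wavedec2(gray.astype(float), 'db2', level=level)
        # coeffs[1] holds the (horizontal, vertical, diagonal)
        # detail coefficients at the coarsest level.
        ch, cv, cd = coeffs[1]
        magnitude = np.sqrt(ch**2 + cv**2 + cd**2)
        return magnitude > thresh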

Image moments allow one to treat any shape as a flat, physical object (like a plate). You can figure out the centre of gravity, axis of symmetry, and other properties that don't change when you move, rotate, or change the size of the object. This typically results in a set of 3 to 7 numbers that you can use to compare how similar shapes are. They were used in early OCR (optical character recognition) algorithms circa 1962.
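The best-known such set is Hu's seven invariant moments (the 1962 work mentioned above), which OpenCV computes directly. A minimal sketch of comparing two outline shapes:

    import cv2
    import numpy as np

    def shape_signature(binary_outline):
        """Seven Hu moments of a binary edge image, log-scaled so
        that their wildly different magnitudes become comparable."""
        hu = cv2.HuMoments(cv2.moments(binary_outline)).flatten()
        return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

    # Two shapes are "similar" when their signatures are close:
    # distance = np.linalg.norm(shape_signature(a) - shape_signature(b))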

Wang uses both edge detection and image moments in the analysis. His algorithm is different from modern ones, because an image must pass a series of five YES/NO tests (a toy version of the cascade appears after the list). Later algorithms would combine the detectors statistically and give a probability estimate.

  1. If the image is small, it is assumed to be an icon, and allowed. Icons (such as a mail envelope) were frequently used on the world wide web in the 1990s.
  2. If the image contains few continuous tones, it is considered to be a drawing and is allowed to pass.
  3. If a great portion of the colours of the image are human body colours, then the image is rejected as porn. The algorithm is pretty smart -- if a patch identified as skin has lots of edges in it, it is probably not really skin and is removed from the analysis. (This also counts as the texture matching step.)
  4. Finally, the edge (outline) image is converted into 21 numbers representing its translation, scale, and rotation invariant moments. If the 21 numbers are too close to anything already in the database, the image is rejected.
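Strung together, the tests form a short-circuiting cascade. Here is a toy version that reuses the skin_mask, wavelet_edges, and shape_signature sketches from above. Every threshold is invented, and where the real system uses 21 moment numbers this toy makes do with seven:

    import numpy as np

    def wipe_screen(image, known_shapes, tolerance=0.5):
        """Toy WIPE-style cascade of YES/NO tests (invented thresholds)."""
        h, w = image.shape[:2]
        if h < 64 and w < 64:                  # small image: assume icon
            return "ALLOW"
        if len(np.unique(image)) < 64:         # few tones: assume drawing
            return "ALLOW"
        if skin_mask(image).mean() > 0.5:      # mostly body colours
            return "REJECT"                    # (texture check omitted here)
        gray = image.mean(axis=2)
        edges = wavelet_edges(gray).astype(np.uint8)
        sig = shape_signature(edges)           # final test: moment matching
        for known in known_shapes:
            if np.linalg.norm(sig - known) < tolerance:
                return "REJECT"                # too close to the database
        return "ALLOW"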
Here are some examples where the algorithm fails. We have blurred them to protect the eyes of the gentle reader. For high resolution versions, you'll have to refer to Proceedings of the 4th International Workshop on Interactive Distributed Multimedia Systems and Telecommunication Services on page 20 (the dog-eared one).

Getting a leg up on skin models

Skin detection is an important step in porn detection, but figuring out which colours represent skin is a hard problem. Colour depends on the lighting used in the photo, the ethnicity of the participants, and the quality and noise level. Michael J. Jones and James M. Rehg at Compaq studied the problem in detail. They first manually labeled hundreds of images, highlighting all the areas that were skin using a custom drawing application. Once you have billions of pixels that you know are skin, and billions that you know are not, you can easily classify them using introductory math:
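In essence, it is Bayes' rule applied to each pixel's colour:

    P(skin | rgb) = P(rgb | skin) P(skin) / [ P(rgb | skin) P(skin) + P(rgb | ¬skin) P(¬skin) ]

A pixel is declared skin whenever this probability exceeds a chosen threshold.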

The paper describes how to find the probability function, P, using a database of images painstakingly highlighted by an army of research interns. However, as a porn detector, the method needs work.
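If you want to play along at home, here is a minimal sketch of the histogram approach in Python. The 32-bins-per-channel figure matches the paper as far as I can tell; the prior and the epsilon are my own inventions:

    import numpy as np

    BINS = 32  # bins per colour channel

    def colour_histogram(pixels):
        """3-D histogram over RGB, normalised into a probability;
        `pixels` is an (N, 3) array of labelled training pixels."""
        hist, _ = np.histogramdd(pixels, bins=(BINS,) * 3,
                                 range=((0, 256),) * 3)
        return hist / hist.sum()

    def skin_probability(pixels, p_rgb_skin, p_rgb_not, prior=0.2):
        """Bayes' rule per pixel, given the two learned histograms."""
        idx = tuple((pixels // (256 // BINS)).T)
        skin = p_rgb_skin[idx] * prior
        not_skin = p_rgb_not[idx] * (1 - prior)
        return skin / (skin + not_skin + 1e-12)

    # p_rgb_skin = colour_histogram(pixels_marked_as_skin)
    # p_rgb_not  = colour_histogram(pixels_marked_as_not_skin)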

It will be obvious to anyone who has bought a digital camera recently how to improve this system. It was even obvious to Google.

Taking the ogle out of Google

In recent years, Google has had its hands full with the problem of pornographic imagery. Henry A. Rowley, Yushi Jing, and Shumeet Baluja at the Mountain View campus have developed a system that combines skin detection with a number of different features. After applying face detection, they can deduce that the pixels around the face represent skin colour, and therefore find other skin pixels in the image. If the face makes up the majority of the image, as in a portrait, the image is safe. They use a colour histogram to detect artificial images such as screen shots. (So dirty cartoons are safe?)
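Google's code is naturally not public, but the face-seeded skin model can be sketched with OpenCV's stock face detector. Everything below -- the hue/saturation histogram, the bin counts, the detector parameters -- is my guess at the idea, not their implementation:

    import cv2
    import numpy as np

    def skin_likelihood_from_faces(bgr):
        """Seed a skin-colour histogram from detected faces, then
        score every pixel in the image against it."""
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, 1.1, 5)
        if len(faces) == 0:
            return None  # fall back to a generic skin model
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
        # Collect hue/saturation samples from the face regions...
        samples = np.vstack([hsv[y:y + h, x:x + w].reshape(-1, 3)
                             for (x, y, w, h) in faces])
        hist = cv2.calcHist([samples[None, :, :]], [0, 1], None,
                            [30, 32], [0, 180, 0, 256])
        cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
        # ...and back-project the histogram over the whole image.
        return cv2.calcBackProject([hsv], [0, 1], hist,
                                   [0, 180, 0, 256], 1)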

Doing what only Google could, they must have set a record for the rate of pornographic analysis. They evaluated the speed of the algorithm on a corpus of around 1.5 billion thumbnail images of less than 150x150 pixels. "Processing the entire corpus took less than 8 hours," the team bragged, "using 2,500 computers."

Bags of visual words (Arm, leg, or ...?)

In 2008, Thomas Deselaers et al. came up with a unique way of finding porn, borrowed from the world of artificial intelligence. Large news databases can automatically classify news articles based on the words in them. Articles containing the names of political figures or sports jargon can be easily categorized by machines that don't really need to understand what the article is about. Techniques exist so that the machines can learn on their own which words or names are important. The same methods can be applied to images, using visual words.
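The text version of the trick takes only a few lines with a modern library. A toy sketch using scikit-learn (the articles and categories are invented):

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    articles = ["the senator proposed a new budget bill",
                "the striker scored twice in the final"]
    labels = ["politics", "sports"]

    # Bag of words + naive Bayes: no understanding required.
    classifier = make_pipeline(CountVectorizer(), MultinomialNB())
    classifier.fit(articles, labels)
    print(classifier.predict(["parliament passed the bill"]))
    # most likely output: ['politics']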

To create the visual vocabulary, they extract image patches around "points of interest", parts of the image that are likely to contain features. The patches are then scaled to a common size and analyzed using PCA to find commonalities. It is similar to face detection, but for things that aren't faces. The analysis also takes colour into account. Because colour is part of the "vocabulary" already, a separate skin detection step is unnecessary.
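Here is a sketch of what building such a vocabulary might look like. The patch size, the vocabulary size, and the use of k-means clustering are my assumptions; Deselaers et al. describe their exact pipeline in the paper:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    PATCH, N_WORDS = 16, 200  # assumed patch and vocabulary sizes

    def extract_patches(image, points):
        """Cut PATCH x PATCH colour patches around interest points
        (assumed to lie away from the border) and flatten them."""
        half = PATCH // 2
        return np.array([image[y - half:y + half,
                               x - half:x + half].ravel()
                         for (x, y) in points])

    # Training: reduce patch dimensionality, then cluster. Each
    # cluster centre becomes one "visual word" in the vocabulary.
    pca = PCA(n_components=40)
    kmeans = KMeans(n_clusters=N_WORDS)
    # kmeans.fit(pca.fit_transform(all_training_patches))

    def bag_of_visual_words(patches):
        """Histogram of visual words; this vector is what the
        final classifier actually sees."""
        ids = kmeans.predict(pca.transform(patches))
        return np.bincount(ids, minlength=N_WORDS)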

Using this technique, Deselaers is even able to go beyond simple YES/NO classification and reach a new level of precision. The algorithm can rate images into one of five categories of increasing offensiveness: benign, lightly dressed, partly nude, fully nude, and porn. The paper contains examples from each category, and is guaranteed to offend somebody.

Corpus non indutus

At the end of their paper, Rowley, Jing, and Baluja (of Google) speculate on how to spur further advances:

...because of the ubiquity of the Internet, search engines, and the widespread proliferation of electronic images, adult-content detection is an important problem to address. To improve the rate of progress in this field it would be useful to establish a large fixed test set which can be used by both researchers and commercial ventures.

Yes, bring on the grant-sponsored porn, so that researchers can make the world a better place. But despite the years of study, one question remains unanswered: if such a corpus existed, how would we find it?

