Gaussian Blur

(Long read)

Recently my friend sent me a selfie from his new Samsung Galaxy phone. At first I didn’t think much of it, but after another glance I saw that his freckles were all gone. It looked almost as if he were wearing make-up, which is quite unusual for him. But his look was the result of neither make-up nor magically changed skin: his new phone had automatically enabled a feature called “beautification mode”, which smooths the skin and had thereby almost entirely erased his freckles.

As an artist and designer working in digital media I am very familiar with tools like the beautification mode on Samsung phones. I do not use that particular phone or filter, but the tools I use work in basically the same way: digital image editing software with an ever-expanding set of digital image filters. Anybody with access to a smartphone or computer has probably had the pleasure – knowingly or unknowingly – of digitally editing images: swiping through filters on Instagram before posting a story, snapping selfies with beautification mode on, adding digital face masks on Snapchat or FaceTime, editing photos in VSCO, and countless other techniques, apps and platforms.

This text is an attempt at tracing some of the roots of beautification through digital image processing. Specifically, it deals with the filter called Gaussian blur, one of the most ubiquitous digital image filters today, one that many other filters build upon and use, and one of the few filters I use every day in my own work. Who or what does “Gaussian” refer to? How is it made? How are filters in general even made? What is the history of Gaussian blur, and of digital filters in general?

 

The shortest version of the story is this: the “Gaussian” in Gaussian blur refers to the German mathematician Carl Friedrich Gauss (1777–1855), who described the Gaussian distribution curve, also known as the normal distribution or the bell curve, named after its recognizable bell shape. The Gaussian Blur filter uses this Gaussian distribution:

The Gaussian curve, aka the normal distribution

A halftone print rendered smooth through Gaussian blur

The deeper story of the Gaussian Blur filter is longer and more complicated. This version of the story suggests that social sciences, politics and picture processing are inseparable. It shows how the Gaussian curve has been used as a beautifying filter in the processing of photos, and also as a kind of “social beauty” filter used to reduce “unwanted” elements in society such as criminals, disabled people and other minoritized groups.

The Gaussian curve was initially based on empirical astronomical observations. When Gauss described it in 1809, it was intended as a way to determine accurate astronomical measurements from the distribution of random errors around a central average value. Or more simply: it was intended to find the most correct positions of planets by averaging many measurements, and the curve came from plotting the errors in those observations. The Gaussian Blur filter uses a Gaussian curve to weight the averaging in digital picture processing. This is best described by comparing it to other types of blur filters, for instance a “box blur”. In a box blur, each pixel is replaced by the unweighted average of the pixels in a small neighborhood around it, so every neighbor counts equally. In a Gaussian blur, the contribution of each neighboring pixel is weighted according to the Gaussian distribution, so the closest neighbors count the most. The final (post-filter) picture is therefore blurred in such a way that sharper edges and lines stay relatively sharp compared to the rest of the blurred picture. It is a way to blur a picture while still keeping the most important contours, lines and edges clear and distinct.
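
As a rough sketch of the difference – my own illustration, assuming NumPy and SciPy are available, not code from any of the sources – both filters replace each pixel with an average of its neighborhood; the box blur weights all neighbors equally, while the Gaussian blur weights them by distance:

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

# A toy grayscale "picture": a bright square on a dark background.
picture = np.zeros((9, 9))
picture[3:6, 3:6] = 1.0

# Box blur: every pixel in a 3x3 neighborhood counts equally.
box_blurred = uniform_filter(picture, size=3)

# Gaussian blur: nearby pixels count more than distant ones.
gauss_blurred = gaussian_filter(picture, sigma=1.0)

print(box_blurred.round(2))
print(gauss_blurred.round(2))
```

Printing the two arrays side by side shows the different weightings around the edges of the square.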

The point of this text is not to determine whether the Gaussian curve in itself is “good” or “bad” – a question that does not really make sense – but to explore the ways it has been used throughout history. The first part of the text concerns the history of digital pictures and picture processing, and the second part concerns the parallel history of social statistics, averageness and eugenics.

A brief history of picture processing

A digital picture is a quantified picture: a picture turned into a set of data records with levels of intensity and color. A digital picture consists of a two-dimensional plane with a finite set of elements, each with a particular location and value. Such an element is usually called a pixel. A pixel can only be one color, created either from a single intensity or brightness value (in grayscale imagery) or from a combination of three or four color channels (RGB or CMYK).
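
To make this concrete, here is a minimal sketch (my own illustration, not taken from the text’s sources) of a tiny grayscale picture as a grid of brightness values, plus a single RGB pixel as a triple of channel values:

```python
import numpy as np

# A 4x4 grayscale picture: each number is one pixel's brightness (0 = black, 255 = white).
grayscale = np.array([
    [  0,  64, 128, 255],
    [ 64, 128, 255, 128],
    [128, 255, 128,  64],
    [255, 128,  64,   0],
], dtype=np.uint8)

# One pixel of an RGB picture: a combination of red, green and blue intensities.
rgb_pixel = (255, 200, 180)

print(grayscale.shape)   # (4, 4) -- a two-dimensional plane of pixels
print(grayscale[0, 3])   # 255 -- the value of the pixel at row 0, column 3
```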

As early as the 1920s, pictures were digitized using the Bartlane system in order to be transmitted across the Atlantic for use in the newspaper industry. The Bartlane system consisted of multiple punch cards made from multiple tint plates that together would make up the complete picture. These plates were encoded on tape and transmitted via telegraph; on the other end, a telegraphic printer would reverse the process. Initially the pictures were encoded with five levels of gray, which was increased to 15 levels in 1929.

The first processing of digital pictures in a computer took place in the 1960s. An early application of digital picture processing was done for the American space program as a way to correct for various types of distortion in photographs taken aboard spacecraft. NASA created digital picture processing methods in order to make legible the pictures taken by telescopes and during spaceflight.

One of the most important adjustments in early digital picture processing was the reduction of noise. The method of superimposing pictures on top of each other has been used since the dawn of photography, as will be shown later, but in the 1960s it was proposed as a method to reduce noise in early digital images: so-called noise reduction by image averaging. Other early research was done at the Jet Propulsion Laboratory in the 1960s, where the digital technique of convolution (more on this later) was used to reduce noise and grain in the pictures received from spacecraft.
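
A minimal sketch of noise reduction by image averaging – my own illustration with synthetic data, assuming NumPy, not code from NASA or JPL – in which several noisy exposures of the same scene are averaged pixel by pixel so that the random noise partly cancels out:

```python
import numpy as np

rng = np.random.default_rng(0)

# The "true" scene: a smooth horizontal gradient.
scene = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))

# Eight noisy exposures of the same scene.
frames = [scene + rng.normal(0.0, 0.2, scene.shape) for _ in range(8)]

# Noise reduction by image averaging: the pixel-wise mean of all exposures.
averaged = np.mean(frames, axis=0)

print(np.std(frames[0] - scene))  # noise level of a single exposure
print(np.std(averaged - scene))   # roughly 1/sqrt(8) of that after averaging
```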

Processing of digital images was at first only attempted in large laboratories and universities, simply because it required computing power beyond what was accessible to the average citizen or even the average business. But not long after the earliest attempts it became commonplace in print magazines and portrait photography. Noise reduction by smoothing, i.e. blurring, became the preferred tool for the beautification process in portrait photography: reducing the sharpness of fine skin lines and small blemishes in order to create a softer and more pleasing final picture.

And a brief aside: it is important here to distinguish between retouching images and using digital image filters. Superficially there are similarities, and filters are often used when retouching images. The most important distinction for the purposes of this text is that filtering is an electronic, digital method – and can therefore be applied broadly – whereas retouching is manual labor, literally done by hand, picture by picture. Retouching is an art form, something that skilled photographers learn and excel at, which has existed since the birth of photography and can be done on both digital and analog pictures. Digital image filters are electronic processes which originated in the 1960s and are still around in more or less the same form. Filtering always works the same way, whereas retouching is selective and manual.

It is likewise important to note that painters, sculptors and their patrons throughout human history have obviously had a huge influence on beauty ideals. The human body and face are perhaps the oldest motifs in art. And while art and science have never really been separate entities – one of history’s most famous portraits, the Mona Lisa, was painted by the artist/scientist Leonardo da Vinci – this text only concerns the ubiquity of mechanical and digital portraiture, which did not arrive until photography became commonplace in the latter part of the 19th century, and digitized pictures in the mid-20th century.

Kernel convolution

The earliest types of digital picture processing in computers used a method called kernel convolution. It is still the underlying method in a lot of simple digital picture processing filters.

Kernel convolution is the process of adding each element of the picture to its local neighbors, weighted by the kernel. Each pixel in the picture has a specific value, which determines the color and intensity (brightness) of that pixel. When a kernel convolution is applied, this value is replaced by a weighted sum of the pixel itself and its surrounding pixels, with the weights given by the kernel.
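
A bare-bones sketch of the technique – an illustration of the general idea in Python, not any particular historical implementation – where each output pixel becomes the kernel-weighted sum of the input pixel and its neighbors:

```python
import numpy as np

def convolve2d(picture: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Replace every pixel with the kernel-weighted sum of its neighborhood."""
    kh, kw = kernel.shape
    pad_y, pad_x = kh // 2, kw // 2
    # Pad the edges so border pixels also have a full neighborhood.
    padded = np.pad(picture, ((pad_y, pad_y), (pad_x, pad_x)), mode="edge")
    out = np.zeros(picture.shape, dtype=float)
    for y in range(picture.shape[0]):
        for x in range(picture.shape[1]):
            neighborhood = padded[y:y + kh, x:x + kw]
            out[y, x] = np.sum(neighborhood * kernel)
    # Strictly speaking this is correlation; for symmetric kernels such as a
    # Gaussian it is identical to convolution, which flips the kernel first.
    return out
```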

Kernel convolution uses a small matrix, the kernel. This matrix can be of any size, and its size changes the final output. The values in the kernel differ depending on the desired filter: if the intent is to blur, the values in the kernel matrix can be those of a Gaussian distribution.
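
One way such a kernel might be filled in – a sketch of the idea, not the exact procedure used by any particular software – is to sample the two-dimensional Gaussian function at each cell and then normalize the weights so they sum to one:

```python
import numpy as np

def gaussian_kernel(size: int = 5, sigma: float = 1.0) -> np.ndarray:
    """Build a size x size kernel whose values follow a 2D Gaussian curve."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    kernel = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return kernel / kernel.sum()  # normalize so overall brightness is preserved

print(gaussian_kernel(3, 1.0).round(3))
# The center weight is the largest; the weights fall off with distance.
```

Combined with the convolution sketch above, convolve2d(picture, gaussian_kernel(5, 1.0)) would give a simple Gaussian blur.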

The values in the matrix are passed over every pixel of the whole picture, calculating new values for all pixels and transforming it: a new picture is created. Anybody in possession of an Adobe license can try this themselves: in Photoshop, go to Filter → Other → Custom. This brings up a grid in which the user can enter numbers, which then become the weights of the kernel convolution. With this, anybody can make their own unsharp, blur, edge-detection, or any other kind of classic digital image filter, just like NASA scientists in the 1960s.
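
For reference, here are a few classic kernels of this kind – textbook examples, shown as they could be used with the convolution sketch above rather than as the exact numbers one would type into Photoshop’s grid:

```python
import numpy as np

box_blur = np.full((3, 3), 1 / 9)           # every neighbor counts equally

sharpen = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]])           # boosts the center pixel against its neighbors

edge_detect = np.array([[-1, -1, -1],
                        [-1,  8, -1],
                        [-1, -1, -1]])       # weights sum to zero: flat areas turn black
```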

In a Gaussian blur the priority is on the pixel of interest and those closest to it. The pixels further away in the grid surrounding the pixel of interest are weighted less in the averaging, and therefore have less of an impact on the newly calculated pixel. In practice this means that a picture blurred using a Gaussian curve keeps edges and contrasts more visible than if the blur were applied equally over the entire picture. The Gaussian Blur smooths the noise while keeping the distinctive overall lines and edges.

Blurring

As described in the previous sections, the Gaussian curve has been used in digital picture processing for decades. But the use of the Gaussian curve is not restricted to picture processing. The remainder of this text is primarily concerned with how the curve that is used to beautify pictures has also been used in other areas of society, namely social sciences, statistics, criminology and the definition of what is normal.

The Gaussian curve is also known as the normal distribution. It is a common continuous probability distribution. The graph of a Gaussian curve has the characteristic bell shape: a symmetrical shape with a smoothly curved top and long tails at each end.
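
Written out as a formula, in its standard textbook form, with μ the average and σ the standard deviation (which controls how wide the bell is), the curve is:

```latex
f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}
```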

The normal distribution is used as a model of how large numbers of statistical elements are distributed around an average value. For example: in a large group of randomly selected people, most will fall close to the average height at the top of the curve, fewer will be somewhat taller or shorter, and very few will be much taller or much shorter than the average. Their heights are distributed symmetrically around the average height, equally on both sides, taller and shorter than the average.
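
To put rough numbers on “most” and “very few”: for a quantity that really does follow a normal distribution, about 68% of values fall within one standard deviation of the average, about 95% within two, and about 99.7% within three:

```latex
P(\mu-\sigma \le X \le \mu+\sigma) \approx 0.68, \qquad
P(\mu-2\sigma \le X \le \mu+2\sigma) \approx 0.95, \qquad
P(\mu-3\sigma \le X \le \mu+3\sigma) \approx 0.997
```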

The normal distribution is empirically based on astronomical observations. When described by Gauss in 1809 it was intended as an attempt to determine accurate astronomical measurements from the distribution of random errors around a central average value. In this view, all values except the exact average are errors.

The normal distribution occurs so frequently in empirical observations that there is a tendency to apply it even to situations where it may not be applicable. The physicist Gabriel Lippmann remarked: “Everybody believes in the exponential law of errors (the normal distribution): the experimenters because they think it can be proved by mathematics, and the mathematicians because they believe it has been established by observation.”

The average man

While Gauss intended his normal distribution curve as a statistical tool mainly for astronomical observations, other scientists later found it useful in other ways.

The Belgian statistician Adolphe Quetelet introduced the conceptual category of the average man in social statistics in the 1830s. The concept was integral to his proposal to create a new social mathematics: a science that would be able to mathematically determine laws of all social phenomena. Quetelet observed that large aggregates of social data, and anthropometric data in particular, fall into a pattern corresponding to the bell-shaped curve of Gauss’s normal distribution. He used data from chest measurements of Scottish soldiers to illustrate his concept of the average man.

The average man constituted the normal value – the mean value at the top of the bell curve – against which the anthropometric data of all others was measured.

Quetelet’s social physics defined the social norm of the average man as the “center of gravity”. The average man was the beautiful, smooth ideal. All errors in society, such as criminals or other “non-averages”, were considered noise that disturbed the peace and harmony of the ideal balance represented by the average man.

In Quetelet’s view, the average man constituted an ideal of not only societal health but also of stability and beauty.

Physiognomy

To understand the context in which Quetelet created his idea of the average man, you need to know a bit about physiognomy: the pseudo-scientific discipline of assessing character or personality from a person’s outer appearance, especially the head and face.

Physiognomy is a tradition with roots in ancient Greece, but as a pseudo-scientific discipline it has resurfaced consistently throughout history and across cultures. For brevity’s sake I will jump directly to the 19th century, when physiognomy became directly linked to Darwin’s theory of evolution. The proponents of physiognomy in this period took Darwin’s studies of similarities between the facial expressions of animals and humans as evidence for a connection between traits in humans and animals: there must be similarities between the behavior of a certain animal and that of a person who looks like that animal.

Although not as famous as his half-cousin Charles Darwin, Sir Francis Galton (1822–1911) is still a well-known Victorian polymath today. He was one of the first to define and write about fingerprint identification. He was also the creator of a “Beauty Map” of the British Isles – a praxis in which he personally classified (what he determined were) beautiful women in different areas of the UK in order to find the most beautiful parts of the country.

One of Galton’s many missions was to define types of people, and with a firm belief in physiognomy, he took to photography to scientifically prove his point. Using photographs as scientific evidence was at the time a relatively new and unusual method, but it had been used by Darwin in his 1872 book The Expression of the Emotions in Man and Animals. A few years later, in 1878, Galton published the first of his composite photographs of various types of people.

Composite portraiture is made by superimposing multiple pictures of different people on top of each other, in order to make a kind of blended photograph. Galton used it as a means to make a photograph of an “average” member of a group. This was intended to describe the average person of a group of people with some similarities, such as profession, diseases or criminal backgrounds. The photographs were combined through repeated limited exposure to produce a single blended picture. This portrays a fictitious figure with the supposed average characteristics of the group. In popular science today composite pictures are still produced, for example when illustrating an ideal face.

Galton noticed that the averaged person in the composite photograph often appeared more attractive than any individual member of the group. Technically this is due to the fact that the noise of the skin in the individuals is blurred in the final composite picture, which is a kind of noise reduction by smoothing similar to what happened in early digital picture processing as described previously. The average composited face appears more symmetrical and without individually identifying skin blemishes, simply because it is a layering of multiple people and therefore individual marks disappear. It is obvious that Galton’s idea of beauty is consistent with the enduring beauty ideal of a symmetrical face with clean skin.


Galton, Francis: Composite photographs of the members of a family, c. 1878-82

TIME Magazine, November 18, 1993

As part of his inquiries into averages and types of people, Galton was interested in the analysis of family connections between a select number of “eminent men” in order to trace the roots of extraordinary talent. His conclusion was that distinguished qualities in man are heritable. In 1883 he coined the term eugenics, from the Greek for “well-born”.

Eugenics

Galton was interested in the selective breeding of eminent and talented men in order to enhance society over time. In his quest to find these “eminent men” and thereby enhance society, Galton had various ideas which he tested along the way, and which eventually led to his theory of eugenics: from his beauty map of the British Isles, to the quality of the Gaussian curve observed in the average man, to his analog image processing of layering multiple images in order to create attractive composite portraits with smooth skin and symmetrical faces.

Galton was a proponent of so-called positive eugenics: selective breeding through the encouragement of mating between persons with admirable qualities in order to breed an extraordinarily gifted race. The mating in Galton’s utopia was to be encouraged by financial rewards from the state. Persons with the highest talent were considered not only more intelligent and longer-lived but also genetically primed to be more beautiful than the average.

The concept of eugenics spread quickly through Europe and the United States in the late 19th and early 20th century. In the early 1900s, the Eugenics Education Society in the UK, with Galton as its first honorary president, proposed measures to reduce birth rates in the lowest classes of society in order to gradually eliminate the poor.

The idea of genetic management rose in the United States in the early 20th century, where it was also linked to immigration. In 1917 a government statute explicitly excluded “all idiots, imbeciles, feebleminded persons, epileptics [and] insane persons”. Purportedly inferior humans from Eastern and Southern Europe were restricted from migrating to the US with the passing of the 1924 Immigration Restriction Act, legislation that was passed as a result of testimony from eugenicists.

Negative eugenics was another method of implementing eugenics: population control through forced sterilization. The intention was to keep down the parts of the population that were deemed “worthless” by forcibly sterilizing them. In the first seven decades of the 20th century, eugenic policies affected up to 64,000 Americans, primarily through measures such as forced sterilization and castration.

Eugenics is today primarily associated with Nazi Germany, where it is known under the synonymous term racial hygiene. Under Nazi eugenics policies, large groups of people were deemed “life unworthy of life”, including “drunkards, sexual criminals, lunatics, and those suffering from an incurable disease which would be passed on to their offspring…”. Even before the beginning of the Second World War, the Nazi regime had compulsorily sterilized up to 350,000 people who were deemed mentally and physically “unfit”. More than 70,000 people were killed in the involuntary “euthanasia” program, in which persons deemed incurably sick were to be granted a “mercy killing”.

Newer developments

Usage of the Gaussian curve and digital picture processing are both obviously still around; neither ended with Galton or NASA. The 1994 book The Bell Curve by psychologist Richard J. Herrnstein and political scientist Charles Murray was highly controversial already at the time of publication, since it argued that racialized people were less intelligent. The bell curve of the title is another name for the Gaussian curve.

Another example of a relatively recent attempt at defining the “normal” body is the Visible Human Project, which started in 1993 – a project that uses computed axial tomography (CAT) and magnetic resonance imaging (MRI) to create super-detailed scans of an entire “male” and “female” body. These detailed maps of entire human bodies are used by institutions and researchers as a basis for understanding human bodies. The male in this instance is a 39-year-old white man named Joseph Jernigan, a convicted murderer who was executed and who had donated his body for scientific or medical purposes. Turning a convicted and incarcerated person into statistics and data points, as done in this project, is methodologically similar to Galton’s use of mugshots of criminals a century earlier. The mugshot was not only a practical invention for indexing criminals but also a means of gathering statistical data on large numbers of people – an early type of big data – in order to create types, standards and a definition of the normal, beautiful body.

Physiognomy is also alive and well today. In a research paper from 2016, the researchers Xiaolin Wu and Xi Zhang from Shanghai Jiao Tong University trained a neural network on a set of regular Chinese ID photos and ID photos of convicted criminals in order to create software that could tell the difference between criminal and non-criminal faces. The researchers claimed that their software was successful in this endeavour.

Contemporary computer vision is also, unsurprisingly, biased. Historically marginalized people in the Western world – women, workers, people with darker skin – tend to be illegible to computer vision, because these technologies are calibrated to bodies assumed to be “normal”, which means young, white and male. This is directly in line with how photographic film has historically been calibrated to capture light-colored skin. It creates a variety of problems: from the practical problem of being unreadable to, for example, automatic passport scanners in the airport, to the non-representation and suppression of marginalized bodies in culture, to the broader issue of being considered not-normal by large tech companies that hold outsized importance in our societies. Even though it may seem on the surface like a good thing to be invisible to computer vision – a way of escaping the surveillance society – when a photographic machine determines your status and fate, being illegible is even worse than being determined not-normal.

Multiple photographers and theorists have shown that photography has, from its beginning, been used to classify, minoritize and penalize people. It is no surprise that this history continues to this day in our image-saturated societies of ubiquitous surveillance, selfies, computer vision, artificial intelligence and so on. The manipulation of pictures is also nothing new – it is as old as the medium itself – and we know that the image itself is no truth-teller. Rather, it is when we look at the techniques of picture manipulation that we get the clearest view of our image society’s ideology: all-encompassing beautification through noise reduction.

The beautification process that happens millions of times daily on social media is not a silly, fun or even neutral pastime; it is part of a broader process of beautifying the individual as well as society at large, through the filtering out of unwanted “noise” – pimples on our faces as well as minoritized people in our societies. In a society of data-driven processing of pictures and humans, we have no option other than to be a standardized and ubiquitous version of beautiful.

References

  • Bridle, James, New Dark Age, Verso, 2019
  • Bulmer, Michael, Francis Galton: Pioneer of Heredity and Biometry, Johns Hopkins University Press, 2003
  • Darwin, Charles, The Expression of the Emotions in Man and Animals, John Murray, 1872
  • Galton, Francis, Inquiries into Human Faculty and its Development, Macmillan, 1883
  • Gonzalez, Rafael C., and Richard E. Woods, Digital Image Processing, 2nd ed., Prentice-Hall, 1992
  • MacKellar, Calum, The Ethics of the New Eugenics, Berghahn Books, 2014
  • Malin, David, and Paul Murdin, Colours of the Stars, Cambridge University Press, 1984
  • Sekula, Allan, “The Body and the Archive”, October 39, Winter 1986
  • Stranneby, Dag, and William Walker, Digital Signal Processing and Applications, Newnes/Elsevier, 2004
  • Wells, Liz, ed., Photography: A Critical Introduction, Taylor & Francis Group, 2015
  • Whittaker, E.T., and G. Robinson, The Calculus of Observations, Blackie and Son Limited, 1924

 

A-B list on scenic design education

In the spring of 2019, as an associate professor at The New School in NYC, I was asked to comment on the education of scenic designers. In response, I created an A-B list (inspired by my former professors Tony Dunne and Fiona Raby’s A-B Manifesto from 2009), where A is the status quo or current situation, and B is my thoughts and wishes for a rethinking or refocusing of the subject.

A → B

Disciplines → Responsibilities

The traditional approach to a design process for a performance is to have a director leading multiple pre-established departments of disciplines (sound design, set design, video design, etc.). The director is responsible for “merging” the distinct artistic disciplines into a coherent design – so in a sense, the director is the overall performance designer in the more traditional process.

In contemporary experimental work we often hear of inter-disciplinary, trans-disciplinary or anti-disciplinary processes, which try to push beyond those traditional roles. My approach is somewhere in between: to acknowledge how disciplines can be useful (productive) as a way to establish areas of responsibility, and then (try to) forget them. This means that as a designer I am not merely concerned with my particular design in a project; I will engage just as deeply with the sound, music, text, movement and all the other aspects that make up the performance.

Obviously, as a designer I do not magically turn into a performer or a musician overnight – even though I engage fully and equally in the development of those aspects as well. This is where it is useful to think not of disciplines but of responsibilities: at the end of the day I am the one responsible for delivering diagrams, technical specs and the final content of the video.

Text → Concept

Traditionally, the text is the primary point of departure for a performance, and everyone in the design process treats the text as the focal point of the performance. But the hierarchy should be flattened: the text is only one part of the performance and should influence it accordingly.

This obviously applies to the creation of entirely new (devised) work, but it also applies when working with a more classic text. Designing for a Mozart or a Shakespeare work does not mean that you as a designer should focus on the text (or musical score, libretto, etc.) and design from that. Rather, the artistic team should create a concept – in that case, of course, built on serious consideration and analysis of the original text as well as other sources and processes – and this concept becomes the focal point for the design process.

Rehearsal → Rendering

The word rehearsal literally means “repeat aloud”, but in contemporary performance the spoken text is equal to all other elements, and therefore the word rehearsal is already, in a way, insufficient. The word itself also has teleological connotations: as if we already know what the final product must be (which we do not).

Instead I suggest: rendering. Every time the work is performed it is a rendering, and every time it is rendered it exists in a way that is just as real as any other time. The term also has practical implications: design for performance involves sophisticated technologies which either function or do not, and the work cannot wait for a sophisticated technology to start working on the last day of tech rehearsals or on opening night. The design, in a way, is the work, merely rendered in various versions.

Set → Image

Traditionally, the set is the space for the performance, with elements that are used by the performers (stairs, chairs, walls). In my design process, the set is the container for the image. The image is the overall design, where all elements make up an entire picture (often framed by the stage opening but not necessarily). The elements in the image are not necessarily to be “used” by the performers in a dramatic sense – the images are the performance, and the image is everything (light, sound, performers, movement, text, objects and so forth).

Industry standards → Local practicalities

The idea of industry standards is outdated, especially if you are a designer working globally. There are local practicalities – communities where certain ideas, workflows and techniques are more common and accepted as a “normal” or standard way of working – but across the United States as well as globally, designers are met with vastly different platforms and workflows. This changes the way designers work.

Knowing tools is – of course – very important, but being flexible and open to other ways of working is a necessity when each venue and situation does things slightly or even vastly differently. Designing, creating and working with systems and protocols is a way to render content physical, and it is the content and concept that constitute the performance, not specific software, tools or brands.