Changes in the surface quality of a photograph are often more disfiguring and distracting than basic changes to the image, such as fading or discoloration. Conservators may therefore be asked to restore a surface, and they must ensure that none of their treatments alter the surface characteristics of the photograph. For such an important photographic quality, we are poorly equipped to describe it. In the absence of a standardized terminology, Passafiume attempted to standardize categories of surface qualities using a sample book for comparison with actual photographs.
Variations of specular and raking light have been used to make the surface qualities of a photograph visible for examination, but these methods do not provide any quantifiable measure of the degree of surface change. A variety of microscopic methods have also been used to examine surface qualities, including basic stereo microscopy, confocal microscopy, Nomarski microscopy, TEM, SEM, and STEM. These methods have been used with some success, but tend to be limited by cost or the size of the field of view. Mechanical and optical profilometry systems are very expensive, cannot handle large objects, and may be destructive; they therefore have a low potential for use in conservation labs in the near future.
The optical underpinnings and theory of edge reflection analysis (ERA) are reviewed with numerous image examples. The hardware setup and measurement method are described. A Mathcad program is required, which is available from the author.
The day books of Edward Weston are full of notes detailing how the change from matt, buff platinum papers to glossy white silver gelatin papers changed the perception of his work. Therefore, the Edward Weston print collection at the George Eastman House was an ideal application case study for ERA. The Edward Weston Characterization Database is described along with samples of ERA examination reports for specific prints.
Now that I am writing this report, my time as a fellow in the first cycle of the Mellon Advanced Residency Program in Photograph Conservation is almost over. When fellows and faculty first met in September 1999, we could hardly anticipate what we might be able to present as a result two years later. We knew we would have access to the best resources the field has to offer, but we also knew that expectations were high and that there was no model to follow. Today I am very grateful that I received this great opportunity. The program has not only allowed me to develop a new analytical technique for quantitative surface documentation and characterization, it has also strengthened my desire to engage further in research and education in photograph conservation. If the field considers the Mellon Advanced Residency Program and its contributions to the advancement of photograph conservation a success, this should be attributed largely to the fact that we were given the opportunity to work with leading experts not only in this but also in related fields. Interdisciplinary and international cooperation, which allows us to see photograph conservation from many different perspectives, has advanced all of us.
The program has also led to a better realization of how much more there is for the field to investigate. Computers and digital images, by some still considered a necessary evil in a conservation lab, will have to become a natural and essential part of a conservator's education and daily work. They will help us to gain new, more objective knowledge about the objects we are preserving.
When we look at a photograph, we look at its surface. When a photographer selects a photographic paper, its surface characteristics are as important to him as the right gradation, image tone or weight. The day books of EDWARD WESTON, whose photographs will be examined later during this work, are full of notes detailing how the change from matt, buff platinum papers to glossy white silver gelatin papers changed the perception of his work (Weston 1981). For many decades, the photographic industry offered a wide variety of surfaces, and even though the range of available paper textures has decreased considerably in recent years, the "perfect" surface gloss is still of major importance for many contemporary fine art photographers and influences the way their images are mounted and framed.
When viewing photographs, surface defects are often more distracting than a certain amount of fading or discoloration. Thus, maintaining or restoring the original surface is one of the major goals of a conservation treatment. Given the importance of a photograph's surface, it is surprising how limited our technology is to precisely describe surface condition. Conservators, registrars, curators and collectors all have the same difficulty. We may be able to describe relative differences between prints by direct comparison, but we lack standards for absolute surface description. To improve this situation, TANIA PASSAFIUME has recently developed a simple tool to help describe photographic print material properties by visual comparison to reference samples (Passafiume 2001). Similar tools have been developed for the graphic arts. (Cunning and Perkinson 1996)
These tools are very helpful for avoiding misunderstandings in general print description; however, they do not attempt to describe an individual photograph's surface texture exactly, let alone provide measurable information about it. For example, these tools cannot differentiate the tiny difference in sheen between two RC prints, one infrared dried and the other air dried. If we knew an artist always used an infrared drier for his RC prints, such a subtle variation may be important. It might indicate a treatment history for a print or a different provenance.
The cases of dubious photographic prints we have recently encountered clearly demonstrate how difficult it is for conservators, those considered to be the material experts, to isolate specific material properties of a photographic paper and attribute them to a certain time of production, manufacturer, or artist. One approach has been to examine a photograph for the presence of optical brightening agents, as the date of their introduction into photographic papers is known (Messier 2000). However, even today they are not incorporated in every photographic paper. Thus, their absence does not conclusively prove that a paper has been produced before the 1960s, and JENS GOLD's recent research has shown that optical brighteners can easily be destroyed, if that is intended (Gold 2001). Paper fiber analysis is another tool to roughly determine the age of a photographic print (Messier 2000). The fiber composition indicates a certain time period during which the material has been produced. However, it requires that a small sample be taken from the object and therefore must be considered a destructive method. Both techniques are very valuable tools, but the successful interpretation of their results requires a great deal of background knowledge and reference data. They also do not tell when the paper was printed, as photographic papers can be stored for years or decades before actually being used.
The same holds for surface analysis, at least for the technique suggested in this paper. Due to a lack of reference data, this technique will not yet be a usable tool for print authentication. This project investigates how surface texture can be measured, what technology is needed to perform the measurements, and how the numeric data from such analysis could help to establish future reference databases. It is hoped that this technique will be explored further, as it promises to improve standards in quantitative surface documentation and interpretation far beyond its application in photograph conservation.
During manufacture, the industry has a wide range of means to manipulate the surface characteristics of a photographic paper. With fiber base papers, texture is influenced by paper pulp composition, sizing, pressure during the drying and calendering operations, embossing rollers, and the amount and technique of baryta application (which could be carried out by the paper mill, the photographic manufacturer or a specialized third party), as well as emulsion additives. With resin coated papers, the structure of the cooling drums for the top polyethylene layer plays a key role in determining the texture of the final product.
Throughout the history of photographic paper making, some of these variables were and still are hard to control. The raw paper making process is influenced by a variety of unpredictable factors, like the quality of the available wood, which varies over the year, water quality, etc. Consequently, many adjustments during raw paper production must be made manually, and the consistency of the product largely depends on the skills and experience of the responsible engineers. Improved technology also leaves its marks. Until recently, only the felt side of fiber base paper was smooth enough to receive the emulsion, as the screen side left a distinct regular pattern in the paper. Today, the use of multiple screens of different width and irregular structure has allowed the coating of the screen side, too. (Dagan 2001)
The thickness, composition, application technique, and number of coatings of the baryta layer have a distinct influence on the final surface and have changed significantly over time. In the beginning, the baryta was applied with a roller and then smoothed with brushes (Fig. 1). Later, the brushes were replaced by air knives to spread the coating evenly. (Fig. 2)
Rough structures, like silk or fabric patterns, are imprinted during the final paper making steps with an embossed roller. The texture left by these rollers could be considered the manufacturer's "fingerprint". Similarly, the cooling drum leaves an unmistakable pattern in the polyethylene during the manufacture of RC papers. Interestingly, such drums can be used only for about a year before they have to be replaced, and they never are 100% identical (Hornig 2000). If the differences are measurable, cataloguing these patterns could provide a perfect tool for paper dating.
The photographic industry mostly relies on the paper mills' quality control and does not maintain large-scale equipment for the control of the surface quality of their base papers. For them, visual comparison to reference samples is usually sufficient to judge whether the surface of a specific paper batch will be suitable for a certain product. (Dagan 2001, Reel/Badura 2000)
The photographic emulsion generally does not contribute much to surface texture. With a thickness in the range of 1 micron, it is comparatively thin, and in order to yield even density at even illumination it must be of constant thickness. Thus, the emulsion follows the paper surface, whose features are on the order of 10 to 100 microns, almost perfectly. The only exceptions to this are matting agents, like starch or silica particles, which sometimes are added to the emulsion. As will be shown later, these additives create an overlaying microstructure, which is superimposed on the base paper structure.
The influence of processing, and especially of drying the print, on the final surface is obvious, and handling and storage conditions may further alter it over time.
It may seem as if such variables, multiplied by the number of manufacturers and users of photographic papers, have led to countless different surfaces that would be impossible to catalogue. Although this is probably not the case, as the number of base paper manufacturers, photographic companies and photographic papers has been finite, this variety could be an advantage. It can be assumed that an evaluation of systematic surveys of paper surface texture during the last 100 years will reveal different types of variations, which then can be correlated to different causes. A change in manufacturing technology at a paper mill could have influenced the products of all customers of that particular paper mill, all at the same time. Certain paper mills may have produced specific surfaces exclusively for certain photographic companies, and for a certain period. Certain photographers may have preferred specific papers and processing techniques during a specific period of their work, etc.
To obtain detailed information about changes in production technology from the photographic industry is difficult or impossible, but not so much because this is proprietary information. Actually, I have found open doors whenever I have approached the manufacturers with such questions. The problem is mainly the industry's limited interest in keeping records and the loss of such records whenever companies went out of business. Additionally, the data we are looking for today may never have been collected in the first place. Generally speaking, it will not be until extensive surveys of photographic paper surfaces have been conducted that we know whether specific surface features appear only in a specific context.
Raking light illumination is a common technique to enhance the visibility of surface texture. How well this technique can model the texture depends on the diffuse reflectance of the surface. The more evenly it reflects incident light into all directions (acting as a "Lambertian" reflector), the less well a relief effect can be observed. The baryta of a white photographic silver gelatin paper is a very good Lambertian reflector, which means its diffuse reflectance is high and is evenly distributed over a wide range of illumination angles. Consequently, the angle of the incident light has to be quite low, around 10-15°, to model the surface texture sufficiently. As long as such images are used just for visual reference, simple arrangements may do the job. If, however, brightness variations from raking light are to be used to calculate surface topography with reasonable precision, certain parameters must be controlled:
As long as one looks at a white, uniform non-image area, the image could be used directly for evaluation. However, this is not always possible, and photographic image details may interfere with the topographic information. (Fig. 3) Digital image processing can provide a solution for such cases (Arney 1994): A second image of the same object is taken, illuminated from the opposite direction. (Fig. 4) The two images are then divided by each other and the mean brightness value of both images (or, alternatively, 128 brightness values) is added to the result. This newly generated image will show only the changes caused by the different illumination (the topographic information), but not the photographic image, which remained constant in both images and therefore is neutralized. (Fig. 5)
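As an illustration of this division step, the following Python sketch (using numpy; not part of the original workflow, which used dedicated image processing software) shows one plausible implementation. The exact scaling and offset are assumptions based on the description above.

```python
import numpy as np

def topography_image(img_a, img_b):
    """Cancel photographic image content from two raking-light images
    illuminated from opposite directions, keeping only the topography.
    A minimal sketch of the division step described in the text
    (after Arney 1994); the scaling choices are assumptions."""
    a = img_a.astype(np.float64)
    b = np.maximum(img_b.astype(np.float64), 1.0)  # avoid division by zero
    ratio = a / b                   # image detail cancels; topography remains
    topo = ratio + 0.5 * (a.mean() + b.mean())     # offset, per the text
    # Stretch to the display range (cf. the normalization described later):
    topo -= topo.min()
    topo *= 255.0 / max(topo.max(), 1.0)
    return topo.astype(np.uint8)
```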
For specular illumination (from the Latin word "speculum" = mirror), the surface of the object is used as a mirror to reflect the image of a (diffuse) light source into the optical path. It can provide excellent images of surface texture, as long as the surface has a sufficient gloss component and the specular reflectance significantly exceeds diffuse reflectance from the baryta or white polyethylene layer. Specular illumination can be achieved by looking at the surface at an angle, while the light source is at exactly the opposite angle. (Fig. 6) Another possibility is to look at the surface from a vertical position through a semi-transparent mirror or prism, which can be used to mirror the light source into the optical path and onto the surface. (Fig. 7) Depending on the surface angles and the size and evenness of the light source, the surface relief can be modeled with distinct brightness variations. (Fig. 8)
A typical application for specular illumination is numismatic photography, where it helps to model the relief of a glossy coin surface much better than raking light alone. In photograph conservation, specular illumination was used successfully for the documentation of daguerreotype deterioration (Arney 1993). Photographic papers with a polymer as image layer, like silver gelatin developing-out papers (including most matte surfaces), usually provide sufficient specular reflectance for this kind of illumination. It can be used at macroscopic and microscopic scales to document and measure surface texture, defects or irregularities.
If diffuse reflectance from photographic image details interferes with specular reflection from the surface, it is possible to separate the two by using polarized light. The following chapter on Edge Reflection Analysis will provide more information about how to use cross-polarization for this purpose.
Paper has a variety of surface-related properties (Scott 1995), including:
Whiteness and brightness refer to the spectral reflection of paper and our visual perception of color. They depend on surface features in the range of nm (nanometers, one millionth of a millimeter) and their interaction with the wavelengths of incident light. In the context of this research, we can neglect the spectral reflectance (color) of papers.
Gloss is a qualitative property of a surface and relates to terms like "sheen", "luster", "glare" etc. It is a pure surface effect and has little to do with the paper's color or brightness. Most interesting here is specular gloss, meaning the amount of light reflected at the same angle as the angle of incident light. The amount of specular gloss, or reflection, is determined by the refractive index of the object, as well as the angle and wavelength of the incident light. For uncoated papers, the standard measuring angle usually is 75°. For coated glossy papers, like photographic papers, the angle typically is much lower, but no standard angle has been defined. For this research, an angle of 60° was found useful.
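As a point of reference (standard optics, not taken from this study): for a smooth dielectric surface at normal incidence, the Fresnel equations give the specular reflectance as

$$R = \left(\frac{n-1}{n+1}\right)^2,$$

so a gelatin layer with a refractive index of about n = 1.5 reflects roughly 4% of the incident light specularly at the air-gelatin interface; oblique measuring angles such as 60° yield somewhat higher values.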
Smoothness and roughness essentially describe the same property. They relate to surface features on the order of micrometers (one thousandth of a millimeter) to millimeters and refer to the finer irregularities of the surface texture compared to a perfectly flat surface. Roughness is defined as the average deviation of the profile depth from its mean, taken over a reference length.
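Written out as a formula (the standard definition is assumed here, since the text gives it only in words), the roughness average over a reference length L is

$$R_a = \frac{1}{L}\int_0^L \left|\, z(x) - \bar{z} \,\right| \, dx,$$

where z(x) is the profile depth and z̄ is its mean.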
Waviness refers to more widely spaced surface features. Roughness can be considered superimposed over waviness. For a roughness analysis, the profilometric data (see below) often need to be filtered to separate roughness from waviness.
Lay is the predominant direction of a surface pattern and texture is the generic term for specific combinations of surface features.
During this research, we have characterized surface textures by their average spatial frequency and their average surface angle or, mathematically more exactly, the RMS (Root Mean Square) deviation of surface angles. The average spatial frequency is the average number of peaks per millimeter, which can be converted to an average element size, or peak-to-peak distance. Due to time constraints, this project could not cover additional features, like element shape or macroscopic patterns. So far, verbal description has been added to the field test databases. Future research may find measurements of these characteristics useful to describe a surface more completely.
Many methods have been developed to measure the roughness of paper. Some of these include optical methods, profile measurements, friction methods, optical contact area, ink contact area and air flow measurements (Scott 1995:63). These methods do not all provide the same information and their choice depends on the intended application of the paper. Only the first two kinds of methods are useful for valuable photographs, as they use non-contact techniques that do not harm the sample.
Depending on the size of surface features to be examined, different microscopic techniques can be used. For this research, the field of view (FoV) was in the range of several millimeters, and did not require elaborate and expensive equipment. At the beginning, a conventional stereo microscope with an attached video camera was used, which was later exchanged for a custom-built camera with an enlarger lens. In a second setup, a variable focus microscope from Edmund Scientific was used at low magnification (ca. 2:1). If the area of interest is much smaller, more complicated (and usually less accessible) equipment must be used. This equipment often does not allow a large photograph to be examined or requires that a small sample be taken from the object. Some methods should however be mentioned, as they are commonly used in surface analysis.
In conventional microscopy, the depth of field is extremely limited. Confocal microscopy overcomes this problem by excluding all unfocused light from reaching the image plane. By scanning the object with a point light source in the XY direction and subsequently moving the object in the Z direction, a stack of section images is created, which ultimately is composed to give a topographic map of the object's surface. Its disadvantages are a rather small FoV and problems in handling thick or coated papers (Beland, C. and Mangin, J., in Connors 1995:1-40).
Nomarski Microscopy, also called Differential Interference Contrast (DIC) Microscopy, uses polarized light passing through a Wollaston prism. The prism creates a second image. Surface defects of the object cause a phase shift of the reflected light. If an analyzer in front of the eyepiece is turned to the right position, only the surface defects are visible. (Bennett 1989:7-9)
For Transmission Electron Microscopy (TEM), which can resolve details about 10^4 times smaller than light microscopes, a replica of the surface has to be made first. The largest area that can be observed is approximately 10 x 10 µm and the sample size must be smaller than 5 x 5 mm. (Bennett 1989:10)
Scanning Electron Microscopy (SEM) can resolve even smaller details than TEM, down to 9-150 Å. Non-conducting objects have to be prepared with a layer of gold or gold-palladium to become visible. Newer SEM microscopes no longer need this gilding and can accommodate samples of several centimeters in diameter. (Bennett 1989:10-11)
Scanning Transmission Electron Microscopy (STEM) and Scanning Tunneling Microscopy (STM) are even more powerful, the latter resolving single atoms in a crystal (Bennett 1989:11).
A widely used technique for surface analysis is profilometry, either with mechanical or optical devices (Wagberg 1993). Mechanical profilers measure topographical features by horizontal scanning with a vertical stylus, which touches the sample surface. The vertical movement of the stylus due to surface irregularities is converted into electrical signals, which then can be used to display a two- or three-dimensional image of the surface and to provide numeric measurements. The resolution depends on the diameter of the tip of the stylus and is in the range of 10-240 µm. Although the technique was not tested during this research, it is likely that the instrument could leave scratches or indentations on sensitive photographic surfaces.
Optical profilers use a laser beam that is focused on the surface of the sample. The movement needed in the Z direction to keep the laser in focus during a scanning movement in the XY direction is converted into electrical signals and evaluated as in the stylus system. The resolution of such systems ranges from around 200 µm down to 0.01 µm.
Another optical technique to measure surface topography is White Light Interferometry. It records interference patterns caused by light reflected from the object and a reference surface. WLI can measure a wide range of textures and recently was suggested for conservation applications.  (Gaspar et al. 2000)
Most of the above systems require an investment in the five- to six-digit dollar range if they are to be acquired for constant use. Without modification, their construction often does not allow large samples, like photographs, to be examined. Sometimes the sample is destroyed or likely to be damaged during the analysis. These machines may therefore be of limited use in the surface analysis of photographs, which are or may someday become valuable. Instead, Edge Reflection Analysis is suggested in this study as a new, alternative approach to texture analysis of glossy surfaces.
Young kids at school know that a small mirror reflecting the sun can create a nicely irritating bright spot on the classroom blackboard, which perfectly follows the movements of their hands. Maybe as adults these children will design laser shows where animated images are created by computer-controlled movement of mirrors. Or maybe they will analyze photographic surfaces. In either case, their success will depend on how well they understand a simple geometric law: the movement of a mirror image is directly related to the changing angle between the plane of the mirror and the location of the source image. Conversely, if the distances to and from the mirror are known, one can use the movement of the reflected image to calculate the angle at which the mirror was tilted. The surface of a photograph can be considered to be composed of many (infinitely many) small mirrors at many different angles. Measuring these angles at defined positions provides information about the topography of the surface. The following chapter will describe how surface texture can be calculated from the reflection of a light source. Currently, the technique, named Edge Reflection Analysis (ERA), does not yet provide information about single surface locations, but about the average surface angle and the average spatial frequency. It thus reduces the complex landscape of a photographic surface, with individually shaped hills and valleys, to a regular pattern of same-size pyramids. Future research will show whether and how ERA can provide a complete surface profile.
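In quantitative terms, the geometric law invoked above is the law of reflection (the source states only the proportionality): tilting a mirror by an angle θ rotates the reflected ray by 2θ, so at a screen distance L the reflected spot moves by

$$d = L \tan(2\theta) \approx 2 L \theta \quad \text{for small } \theta,$$

which is the linear relationship that the calibration procedure described later exploits.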
When at the beginning of this research it was found that specular illumination from a diffuse light source showed surface texture more clearly than raking light from a point light source, it was also noticed that the size of the diffuse light source played an important role in the system. Three simple, idealized examples (A-C) may serve to illustrate this.
(A) If we are looking at the surface at an angle and the reflection was from a point light source (located at infinite distance, emitting parallel rays) in an otherwise black environment, only those parts of the surface at the "right" angle would mirror the light source into our eye or a camera. The result would be an image of small bright spots in an otherwise black image area. A setup like this would only inform us about the presence of one specific surface angle at specific locations. To get more information, we theoretically could change the position of the light source or tilt the object table gradually and monitor the changing position of each individual reflection spot, but this would not be a practical solution.
(B) If we increase the size of the light source by placing diffusors in front of it, the range of "right" surface angles also increases. With increasing size of the diffusor, the reflection spots will become larger and formerly separated reflections will combine into larger areas. When trying to find appropriate dimensions for the diffuse light source, it will be found that different surfaces require different sizes in order to best present their structures over a representative image area. (Fig. 9) However, although one can nicely see surface texture, we are now lacking some essential information for the calculation of surface angles. The image does not provide a reference for which point within the area of the light source has caused which point within the reflection. We can calculate surface angles only if we know this relation. If we further increased the size of the light source to infinite dimensions, a single reflection would finally cover the whole surface and no surface element would be visually detectable or measurable. Thus, a large diffuse light source seems to be rather useless for this purpose.
(C) We can overcome the lack of spatial reference information somewhat by changing the size of the light source to (almost) infinite in the X-direction and (almost) zero in the Y-direction. Experimentally, such a strip light source was created by masking a small fluorescent tube with aluminum foil, leaving open a one-millimeter wide and several centimeters long window. (Fig. 10) Now the origin of the reflection is much better defined, which allows the calculation of surface angles in the X-direction. However, this setup still has its drawbacks. Not only does the width of the slit limit the precision of the measurements; the resulting images are also redundant. The upper and lower halves of the image are almost identical.
This experiment finally pointed to the edge of the light source as its most meaningful feature. It is the edge that causes parts of the image to switch from black to white. The edge is one-dimensional, its position can be well defined, and if it is aligned to the Y-direction and visible in the center of the image, it allows a calculation of surface angles in the X-direction, as will be shown below. Experimentally, this illumination is realized by positioning a diffuse light source so that its reflection fills only half of the image. If the surface under examination were very smooth, like a perfect first-surface mirror, we would see an out-of-focus image of the edge. It would appear as a smooth transition area between a white and a black half of the image. If we tilted the surface a little, the edge image would change its position. Tilting angle and movement are directly proportional to each other. The measurement of this edge image movement is used to calibrate the ERA instrument.
If we replace the perfect mirror with a textured surface, we see many small edge images scattered over the width of the image that form a mosaic of the original edge image. (Fig. 11) Still, the brightness at each specific location refers to the "ideal" edge image, i. e. we know at what position this brightness should occur if the sample had no texture, and we know where it really is. From the distance between those two positions, we can calculate the angle, because at the beginning of the experiment we had measured the interdependence of edge movement and surface angle by tilting the perfect mirror.
The following paragraph will explain the optical geometry for Edge Reflection Analysis in greater depth. A less technically oriented reader may directly go to the paragraph on calculation of surface angles, p. 16.
The situation in example (A), with the reflection originating from a small light source at infinite distance, could be simulated in the lab with a point light source and a collimating lens. Fig. 12 illustrates the principle. Ray A from the light source is reflected at the surface to reach the image plane at a'. If the surface is tilted at an angle αB, the ray (now called B) may still reach a', as long as the lens aperture is opened wide enough. The brightness of point a' is independent of the surface angle. Only if the surface angle αC exceeds a maximum number of degrees, so that the ray (now called C) no longer enters the lens, will point a' be black.
An image of an actual surface taken with this setup would be a pure black-and-white image, provided the variation in α is sufficient to partly exclude rays from entering the lens. The brightness signal at the image plane for such an image, displayed as an irradiance-versus-position graph (I vs. x), is shown in Fig. 13. Obviously, the information about the surface angle α will be very limited with this setup.
Example (B) is illustrated in Fig. 14: The rays from a diffuse (or Lambertian) light source are emitted at all angles. If this light source had infinite dimensions, there would always be rays (A, B, C) reflected to point a', independent of the surface angles. Consequently, the image at any position x' will be white and the information about the surface angle α will be zero, as illustrated in the I vs. x graph. (Fig. 15)
If we limit the dimension of the light source in one direction (example (C)) so that a sharp edge occurs in the center of the image, we can increase the amount of information about α tremendously. (Fig. 16) If the surface under examination is a perfect first-surface mirror, the brightness observed at x' would be a smooth transition from black to white, because the camera is focused on a, and the reflected image of the edge is out of focus. Reducing the lens aperture would increase the depth of field and yield a sharper image of the edge, resulting in a more abrupt change from black to white at x'. If the surface is tilted at an angle -α or +α, the out-of-focus edge at x' will shift accordingly, as illustrated in the I vs. x graph. (Fig. 17)
On a structured glossy surface, different locations x have different angles α, which will create superimposed images of the out-of-focus edge in the image plane. This appears as a spread-out image of the edge at x'. Brightness variations within this spread-out edge image relate to the angles ±α at the surface. Fig. 18 shows the effect with six different photographic surfaces.
Experimentally, it is not possible to work with a diffuse light source of infinite dimensions in all but one direction. Not only do such light sources not exist; they would also cause excessive diffuse reflection from the photographic image below the surface. This would interfere with the analysis and reduce the signal-to-noise (specular-to-diffuse-reflection) ratio, because with increasing size of the light source, the specular reflection does not get brighter; however, the diffuse reflection from the photographic image does. (Fig. 19) In order to keep the diffuse reflection to a minimum, a light box was used that created a reflection just a little bigger than the field of view.
Restricting the size of the light source causes some edge effects in the upper and lower parts of the images, visible as darker, more strongly X-axis-oriented edge images than in the center, where the edge image is mostly Y-axis oriented. This limits the useful area for spatial analysis to the center of the image.
The amount of diffuse compared to specular reflectance also varies with the overall brightness of the photographic image, the signal-to-noise ratio being best with black surfaces. However, if the photographic paper under examination shows a finely structured photographic image, it may not be possible to select a uniformly black field of view. Fig. 20 illustrates how one can eliminate the diffuse component almost entirely by subtracting a second image, which shows only the diffuse component, from the image containing both diffuse and specular reflection. To achieve this, cross-polarized light is used.
Polarized light waves oscillate in only one direction. They can be created by fixing a polarizing filter in front of the light source. The filter acts like a screen of bars. Only waves that oscillate aligned to the bars will pass the filter, all others will be blocked. The polarized character of light is invisible to the naked eye, but if one holds a second polarizing filter in front of the eye, the filter can be turned until its "bars" are at a perpendicular position to the light waves. At this position, the filter neutralizes the polarized light almost completely.
In order to create images that allow a separation of diffuse and specular reflection, the light source is covered with a polarizing filter. Even after light from this light source has been specularly reflected, it will still be polarized. However, the diffusely reflected light loses its polarization. If a second polarizing filter (analyzer) is put in front of the camera lens, it can be adjusted so that the specular reflection either does (aligned polarization) or does not (crossed polarization) enter the camera. The unpolarized diffuse reflectance cannot be kept from entering the camera by the analyzer. If a first image (H1) is taken with the analyzer adjusted to show a bright specular reflection, a 90° turn of the analyzer will (almost) neutralize it (H2). If image H2 is subtracted from image H1, the resulting image (H3) shows (almost) only specular reflection. (Fig. 20)
During image processing, results can be normalized, i. e. the contrast range spread out evenly from brightness value 0 (black) to 255 (white). All images analyzed during this project were such difference images.
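The subtraction and normalization can be expressed compactly in code. The following Python sketch is a minimal reconstruction of the H3 = H1 - H2 step described above; the names H1, H2, H3 are the labels used in the text, not a published API.

```python
import numpy as np

def specular_only(h1, h2):
    """Isolate specular reflection by subtracting the crossed-polarization
    image (diffuse only) from the aligned-polarization image (diffuse plus
    specular), then spreading the contrast evenly from 0 to 255."""
    diff = h1.astype(np.float64) - h2.astype(np.float64)
    diff -= diff.min()                  # shift so the darkest pixel is 0
    if diff.max() > 0:
        diff *= 255.0 / diff.max()      # normalize to the full tonal range
    return diff.astype(np.uint8)
```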
The math for this project has been contributed by Prof. JONATHAN ARNEY, Center for Imaging Science at Rochester Institute of Technology (CIS/RIT). It will not be explained in detail here, but is included in a joint article by J. ARNEY and the author that has been submitted for publication in the Journal of Imaging Science and Technology. A draft of this article can be found at the end of this report. An example of the application of the mathematical protocol is the Mathcad® file, which was used for this project and is available from J. ARNEY or the author.
The brightness variations observed within the images in Fig. 18 refer to many small images of the reflected edge that are shifted and superimposed due to specific surface angles. To calculate such an angle, we must know (A) where the center of the edge would be located if the surface were perfectly flat, and (B) how much the edge moves over the image plane when the surface is tilted at a certain angle. Visually, the center of the edge is the middle of the transition from white to black. Mathematically, it is determined by scanning the image horizontally and finding the point of maximum brightness difference. Its movement due to changing surface angles is measured during the calibration of the system by looking at a perfectly flat surface and taking two pictures, one at the zero position and one with the surface tilted a known number of degrees. The number of pixels the edge has shifted in the second image can then easily be measured with image processing software. The ratio of tilting angle to edge movement forms a calibration constant K for the analysis. See page 23 for more details about the actual calibration procedure.
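These two calibration quantities can be illustrated with a short sketch. The edge-finding criterion below follows the "maximum brightness difference" rule described above; the variable names are illustrative, not from the original Mathcad® file.

```python
import numpy as np

def edge_center(row):
    """Locate the center of the edge image in one pixel row as the point
    of maximum brightness change between neighboring pixels."""
    grad = np.abs(np.diff(row.astype(np.float64)))
    return int(np.argmax(grad))     # column of the steepest transition

# Hypothetical calibration relation: tilting the table by d_alpha degrees
# moves the edge image d_pixels, so
#   K = d_alpha / d_pixels     (degrees of surface tilt per pixel of shift)
```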
Subsequently, mean brightness values are calculated for the central 100 pixels in each column, compared to the actual brightness of each individual pixel, and the variance is calculated. (Fig. 21) Only the central 100 pixel rows are used in order to reduce the influence of some edge effects at the top and bottom edge of the image, caused by the limited size of the light source.
From the mean brightness values P̄, compared to the locations of the pixels, and the calibration constant K, it is then possible to calculate the average surface angle. More precisely, the RMS (Root Mean Square) deviation of surface angles is calculated and called σα ("sigma alpha") in this study. The unit for σα is degrees, and it informs about the dominant surface angle present in this surface. It can also be understood as the average slope of the hills and troughs that form the surface profile.
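A hypothetical reconstruction of this protocol in Python is sketched below. It is an interpretation of the verbal description above, not a translation of the actual Mathcad® file, which may differ in detail; in particular, it assumes a monotonic, black-to-white mean edge profile.

```python
import numpy as np

def sigma_alpha(image, K, center_rows=100):
    """Estimate the RMS deviation of surface angles (sigma alpha, degrees)
    from an edge-reflection image, given the calibration constant K in
    degrees per pixel. Each pixel's brightness is mapped back to the column
    where that brightness "should" occur on the ideal edge profile; the RMS
    of the displacements, times K, gives sigma alpha."""
    h, w = image.shape
    rows = image[h // 2 - center_rows // 2 : h // 2 + center_rows // 2, :]
    rows = rows.astype(np.float64)
    p_mean = rows.mean(axis=0)          # mean brightness per column (P-bar)
    x = np.arange(w, dtype=np.float64)
    # Assumes P-bar increases monotonically (black-to-white edge);
    # flip the image horizontally first if the edge runs the other way.
    ideal_x = np.interp(rows.ravel(), p_mean, x)
    dx = np.tile(x, rows.shape[0]) - ideal_x        # edge displacement, px
    return K * np.sqrt(np.mean(dx ** 2))            # RMS angle in degrees
```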
The precision of surface angle measurements still has to be specified. It depends on a variety of parameters, like image quality, the optical geometry, and how precisely the calibration data can be measured. With the geometry and image resolution used during this research, a movement of the edge image of one pixel meant a variance in surface angle of approximately 0.024°. Without further examination, a precision of ±0.05° (95%) was estimated for surface angle analysis.
At least with the geometry used during this research, the range of angles that can be measured in practice is probably clearly below 10-15°. This may seem surprisingly small, as the angle between the optical axis and the surface under examination was 60°, which in theory should allow measuring surface angles up to almost 30°. However, the useful range is further limited by the field of view and the currently used calibration procedure. During calibration, the edge image is positioned in the center of the FoV (see Fig. 28 and 29, p. 22) and the object table is tilted to determine the calibration constant K. The maximum allowed tilting angle is reached just before the edge moves out of the FoV. A closer look at the geometry could verify whether this maximum tilting angle is also the maximum surface angle that can be measured.
Another feature of interest is the frequency of the surface profile curve, which informs about the size of the structural elements. As there may be many different sizes present in a randomly structured surface, a profile curve is composed of many such frequencies overlaying (interfering with) each other. They can be identified by performing a 2-dimensional Fast Fourier Transform on the image data. The Fourier transform is another way to describe and display digital images. Instead of X/Y coordinates with attributed brightness values, the Fourier transform calculates frequencies that, by interference, best fit the brightness variations along a pixel row or column. (Russ 1999) The average of the most dominant frequencies in a particular image is most likely to best describe its surface, if a single number must be used. It will be called ωmean ("omega mean", cycles per millimeter) in this study, and its wavelength will be referred to as λmean ("lambda mean", millimeters), describing the average peak-to-peak distance of surface elements.
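The following sketch illustrates this frequency analysis in Python. Since the text does not specify how the "most dominant frequencies" are averaged, a power-weighted mean of the 2-D spectrum is assumed here; ωmean and λmean are returned as defined above.

```python
import numpy as np

def omega_mean(image, pixels_per_mm):
    """Estimate the dominant spatial frequency omega_mean (cycles/mm) and
    the corresponding lambda_mean (mm) of a texture image via a 2-D FFT.
    A hedged sketch; the weighting scheme is an assumption."""
    img = image.astype(np.float64) - image.mean()   # drop the DC component
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    yy, xx = np.indices((h, w))
    fy = (yy - h // 2) / h                          # cycles per pixel, Y
    fx = (xx - w // 2) / w                          # cycles per pixel, X
    freq = np.hypot(fy, fx) * pixels_per_mm         # cycles per millimeter
    w_mean = (freq * power).sum() / power.sum()     # power-weighted average
    return w_mean, 1.0 / w_mean                     # omega_mean, lambda_mean
```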
Although single numbers are helpful to describe the most dominant surface features, and λmean is very useful if the surface shows a regular pattern, the overall distribution of all frequencies has a distinct influence on the visual appearance of a surface. Thus, looking at ωmean alone could be misleading when comparing two surfaces. A graph showing the frequency distribution can be a significant help during image comparison. Likewise, additional characteristic frequencies may be identified to improve texture description. Fig. 22 shows that a much wider range of frequencies is necessary to create the texture in image I than in image K. Image I also uses fewer large and more small frequencies than image K. The regular pattern in image J causes a distinct peak at a certain wavelength. Please note that the Mathcad software has automatically adjusted the scale of the Y-axis to the height of the graph window. Considering this, the actual differences between these graphs are even more impressive.
The precision of the spatial frequency calculation has not yet been verified. It is limited by the field of view and by image quality. For this study, image evaluation was based on video images with a resolution of 480 x 640 pixels, representing a FoV of approximately 7 mm. Thus, a single pixel represents 0.01 mm. The smallest resolvable element size is approximately 4-6 pixels, so the smallest measurable element size would be around 0.05 mm.
Any digital camera that can be attached to a microscope or a close-up lens can be used for image capture, as long as the images provide sufficient resolution to clearly see the surface features to be measured. Typically, digital video cameras with a resolution of 480x640 pixels are adequate. Higher resolution cameras can increase the precision of the measurements, but it is always possible to reduce the camera-object distance to reduce the field of view and resolve smaller details. A color camera is not required and if color images must be used, they may be converted to grayscale images to reduce calculation time and memory space. The camera should allow the manual override of any automated exposure control features, like automatic gain, contrast or color control. It should also provide a linear brightness signal and not a logarithmic one; otherwise, the images will have to be corrected before analysis.
The optical system must be corrected for close-up and macro (or even micro) photography; otherwise image distortions or partial unsharpness will reduce precision. A stereo microscope could be used, but because of its rather complicated optical geometry, it may be difficult to adjust the position of the instrument exactly. A microscope with variable focal length has been used with good success, as has an enlarger lens from a photographic enlarger, which was attached to a simple custom-made cardboard camera body. The lens should have an adjustable f-stop, which can be set to repeatable apertures. This aperture is used to control the depth of field, not to compensate for differences in image brightness, as the f-stop position affects the sharpness of the reflected edge.
As the optical axis is positioned at an angle different from the ideal 90° to the object plane (the angle used in this research was 60°), it may be found that the lens does not provide enough depth of field for a sufficiently sharp image, even if the smallest possible aperture is used. To overcome this, the lens can be tilted according to the SCHEIMPFLUG principle. If the extended object, lens and image planes intersect at the same point, the complete object plane will be in focus. It is important, though, that this tilt does not move the image too close to, or even out of, the angle of coverage of the lens. Please refer to handbooks on professional view camera photography for more details.
However, the industry seems to provide suitable camera bodies for Scheimpflug correction only for professional photographic equipment and not for microscopes or video cameras. For this research, a camera body was custom made by the author to allow the lens to be tilted.
The lens must be covered with a linear polarizing filter, which can be mounted to the lens directly or otherwise inserted in the optical path between lens and object. In the latter case, it must be aligned parallel to the lens to avoid image distortions. (Fig. 23)
To provide a reflection of even brightness on the surface, a small light box with a milk glass diffusor at the bottom was created. (Fig. 24) The dimensions of the base diffusor were 25 x 25 mm. The diffusor later was partly covered with black tape to reduce the width of the diffusor to a vertical window of approx. 25 x 8 mm. This provided a reflection just slightly wider than the FoV and reduced diffuse reflectance to a minimum. The light box was illuminated with a conventional cold light fiber optic illuminator with a diameter of about 12 mm, as is used for low magnification microscopy. Although sufficiently even brightness is achieved with the milk glass diffusor, other illumination devices, like a light integrating box with an open window developed by JAMES MICHAEL, CIS/RIT, can be used successfully.
When examining very smooth, glossy surfaces, the width of the scattered edge reflection (the transition area from pure white to pure black) may cover only a small area in the center of the image. On the other hand, with the same setup and rough surfaces, it can exceed the FoV and one may not see pure white and black at all anymore. Images A and F in Fig. 18 show examples of very glossy and rough textures. It is possible to predetermine the width of the edge for a certain range of surface roughnesses by adjusting the distance of the top edge of the light source to the object. (Fig. 25) Alternatively, a moving slide can be made out of a piece of black paper. It partly covers the top edge of the light source and slides vertically along the body of the light box. (Fig. 24) For optimum analytical results, the edge area should cover between about 1/4 and 1/3 of the FoV.
To provide polarized light, a slide with two polarizing filter foils, one of which is turned 90°, is mounted to the front of the milk glass. Depending on whether crossed or aligned polarization is desired, the appropriate polarizer can be moved in front of the diffusor. Another option would be to have only one polarizer at a fixed position in front of the light box and to adjust the second polarizer (analyzer) in front of the lens to either show or eliminate the reflection. However, we found the first arrangement to be less prone to image movements due to vibrations.
The light box is positioned over the object at an equal and opposite angle to the optical axis, so that the rear edge almost touches the surface, to simulate the "infinite" dimension of the diffuse light source (Fig. 26). To adjust its position precisely, a rack and pinion stage with adjustability for the XYZ-directions is extremely helpful.
A regular copy stand can be used to mount the camera. To adjust the camera position precisely, a rack and pinion mount is advisable, which should be aligned with the optical axis. Otherwise, the edge will move out of the image when using the copy stand's usual vertical mechanics to position the camera. Another option would be to mount the entire column at the desired angle to the copy stand.
For calibration purposes, the object table must be tiltable relative to the optical axis. For the setup shown in Fig. 23, a glass plate hinged to the edge of a board was used. To allow sufficient tilting, the hinged plate should be mounted at some distance (1-2 cm) above the copy stand board. It should have its turning axis in the middle of the field of view and, if possible, at the height of the object plane. This will minimize movement of the image between the tilted and untilted positions during the calibration procedure (see below). It is essential that the angle for the zero position of the object table can be adjusted precisely.
To digitize the video signal and to allow image capture with a computer, a framegrabber must be installed. These devices are available as external equipment, plugged into the parallel port of the computer (useful for laptop computers), or as internal cards (for desktop computers) (Fig. 27). We have used both a built-in card for scientific applications and an external model for amateur use, with good success. If the operating software does not provide a big enough preview image, or shuts itself down a few seconds after activation, an additional TV monitor is very helpful for calibrating and adjusting the system and for focus control (Fig. 28).
During the calibration steps, image processing and analysis software is required. There seems to be no need for expensive commercial products at this point, as sufficiently powerful freeware can be downloaded from the Internet. The software used during this project was Scion Image for Windows®, version Beta 4.0.2, by Scion Corp., www.scioncorp.com, and CISlab®, version 1.0.85, by the Center for Imaging Science/Rochester Institute of Technology, www.cis.rit.edu. Adobe Photoshop® is helpful for some intermediate steps, but not a requirement. The analysis was done with a file written for Mathcad®, versions 8 and 2001, by Mathsoft, Inc., www.mathsoft.com. Evaluation data were stored in the database program FileMaker Pro®, version 5.0v3, by FileMaker, Inc., www.filemaker.com. For future applications of Edge Reflection Analysis as proposed in this research, custom-written software combining all steps from system calibration to image capture, data storage and retrieval in one application would significantly facilitate and speed up the work.
It is essential that the images acquired clearly show the surface features to be analyzed. Otherwise, exact measurements will be difficult or impossible. The image must be sharp and the elements to be measured must be within the resolution of the imaging system. The brightness range of the image (image contrast) should make good use of the tonal range of the recording media. Unsharpness, lack of contrast or over- and underexposure will create noise and thus will reduce measurement accuracy.
For this research, the Bitmap (*.bmp) format was used. As long as the image processing and analysis software can read it, any uncompressed file format may be suitable.
For calibration purposes, a photographic reference print on smooth, glossy black-and-white RC paper is used, showing the contact print of a glass reticle with a 1/10 mm scale. With this print on the object table, the field of view (FoV) is adjusted on a preview monitor. (Fig. 29) For this procedure, it is preferable to have the polarizers adjusted so that no edge reflection is visible. An image of the reticle is then captured, and the image scale is measured in an image processing software and stored for future reference.
For most structured photographic surfaces, a FoV of around 7 mm will show the desired detail, but smooth, glossy surfaces may need a smaller FoV to detect significant features. If the FoV must be increased significantly for very rough surfaces, it should be checked whether the image still resolves the desired texture detail, or whether a higher resolution camera must be used.
Any change of scale during an image acquisition session will require a new system calibration.
The sharpness of the reflected edge images and the width of the scattered reflection area depend on the f-stop, the distance of the edge and the roughness of the surface. (Fig. 25) For this research, an edge-to-object distance of around 5-6 mm has been found to provide a range of useful edge image widths for many photographic surfaces. Whenever the edge distance is changed, the edge movement (see below) has to be remeasured.
During calibration, the center of the edge image should be positioned in the center of the FoV by carefully adjusting the X/Y position of the light source. At a certain time during this study, this was an essential requirement, as a specific version of the custom-written Mathcad® analysis files would not give correct results for sigma alpha (σα) with the edge reflection out of the image center. The Z-position of the light source is chosen so that its lower edge almost touches the surface of the object. Parallel alignment of the top edge of the light source and the surface under examination should be checked to ensure a vertical position of the edge image. This is a prerequisite for a correct analysis.
As long as the reference print has a very smooth surface, it can also be used to determine the edge image movement δP at a specific surface angle δα. As was explained in the chapter on Edge Reflection Analysis (p. 16), this measurement determines the calibration constant K for a given equipment geometry. Two images are taken, both showing the edge. For the first image (Fig. 30), the object table is at the zero position; for the second, it is tilted a known number of degrees (Fig. 31). The images are then combined using image processing software and the distance between the two edges is measured. (Fig. 32) The corresponding angle is calculated from its tangent, given by the ratio of the height the object table was lifted to the distance from the turning axis to the edge of the table. As long as the object table was tilted sufficiently to make the edge image move about 1/4 to 1/3 of the FoV, an inaccuracy of a few pixels should be of no concern. It will have very little effect on the surface analysis.
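In code, this calibration reduces to a few lines. The following sketch uses illustrative argument names and hypothetical example values; note that the resulting K is consistent with the roughly 0.024° per pixel mentioned earlier.

```python
import math

def calibration_constant(lift_mm, axis_to_edge_mm, edge_shift_px):
    """Compute the calibration constant K (degrees per pixel) from one
    tilt of the object table, per the tangent relation described above.

    lift_mm:          height the free edge of the table was lifted
    axis_to_edge_mm:  distance from the turning axis to that edge
    edge_shift_px:    measured shift of the edge image, in pixels
    """
    tilt_deg = math.degrees(math.atan(lift_mm / axis_to_edge_mm))
    return tilt_deg / edge_shift_px   # degrees of surface angle per pixel

# Example (hypothetical values): lifting the table 2 mm at 150 mm from the
# axis shifts the edge image 32 pixels -> K is about 0.024 degrees/pixel.
```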
For Scion Image®, the following procedure provided good measurements for edge movement:
1. Open both images. From the 'Options' menu, apply 'Threshold'. Switch to the 'Process' menu and then choose 'Binary, Make Binary'.
2. From the 'Edit' menu, choose 'Invert' to invert one of the two images.
3. Go back to the 'Process' menu, choose 'Image Math' and add image A to B. By default, Scion Image suggests multiplying the result by 0.5 and adding 128 brightness values, which is o.k. Activate the 'Real Result' button and click 'OK'.
4. If the edge is too irregular to easily measure its position, you can choose the line drawing tool from the 'Tools' menu and draw two black lines along the edges to estimate the exact position of each edge.
5. Choose the measuring tool, again from the 'Tools' menu, and place the beginning and end points at the edges or lines. From the 'Analyze' menu, choose 'Measure', then look up the length in the Information window. You may have to repeat the measurement a few times and average the results.
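For readers who prefer to script this measurement, the following Python sketch performs a simplified equivalent of the procedure above: it binarizes both edge images, locates the edge in each pixel row, and averages the shift. It replaces the manual distance measurement and assumes the edge appears as a black-to-white transition in every row.

```python
import numpy as np

def edge_shift_pixels(img_zero, img_tilted, threshold=128):
    """Average horizontal shift of the edge between the zero-position and
    tilted calibration images, in pixels. A hedged stand-in for the manual
    Scion Image workflow; the threshold mirrors the 'Make Binary' step."""
    def edge_cols(img):
        binary = img >= threshold
        # First bright pixel per row marks the edge position; rows without
        # any bright pixel would wrongly report column 0.
        return np.argmax(binary, axis=1).astype(np.float64)
    shift = edge_cols(img_tilted) - edge_cols(img_zero)
    return float(np.abs(shift).mean())
```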
If the FoV was determined by focusing on the reference image, but the actual object under examination is of a different thickness, the focus has to be readjusted. It is preferable to move the complete camera instead of adjusting the focal distance. Changing the focal distance would lead to a different scale of reproduction, making a new calibration necessary. During focusing, the lens aperture should be opened completely. If the Scheimpflug principle cannot be used, only part of the image will be in sharp focus. As a rule of thumb, the point of sharp focus should not be in the center of the image, but slightly shifted towards the foreground. The increase in depth of field with a smaller aperture extends about 1/3 towards the foreground and 2/3 towards the background.
After focusing, the lens aperture should be closed until the complete field of view is in focus.
The two polarizing filters will keep a lot of light from reaching the camera. If the image appears too dark with the f-stop closed down, a stronger light source should be selected rather than opening the aperture. Image brightness cannot be controlled on the video monitor, but should be checked with the preview image of the framegrabbing software and by looking at the histogram of the acquired image. The brightest reflection spots should reach but not exceed pixel values 250-255.
Ideally, a homogeneous black area should be photographed, as it provides the best signal-to-noise (specular-to-diffuse-reflection) ratio. If this is not possible, an area as dark as possible, with as little photographic image detail as possible, should be selected.
The center of the edge reflection should be located in the center of the image to ensure correct results from the σα calculation. However, even if the position of the edge reflection was adjusted correctly during calibration, distorted or wavy objects with a different mean surface angle than the surface used for calibration may cause the reflection to appear off-center. By adjusting the angle of the object table, the edge reflection can be moved back to the center of the image, thus restoring the original geometry used for system calibration. Please note that this is only possible if the turning axis of the object table was placed correctly in the center of the FoV; otherwise, tilting the object table will move the surface out of focus.
Four images are required if it is assumed that the surface texture shows lay (spatial orientation) and that some diffuse reflection (such as photographic image detail) is present. For the first photograph, the polarizers are aligned, i.e. the edge reflection is visible. For the second photograph, the polarizers are crossed so that the reflection disappears. It is essential that neither the object nor the camera moves between the images. The sample is then turned 90° and two more images are taken, again with aligned and crossed polarization.
If the field of view shows a perfectly homogeneous material without differences in diffuse reflection, the image with crossed polarization is not necessary. If lay can be excluded, a single image of an edge reflection is sufficient.
In order to keep track of the many images during this research, those with specular reflection were marked with a "+" at the end of the file name, those without with a "-".
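One plausible way to use such an image pair is to subtract the crossed-polarizer image (diffuse reflection only) from the aligned one, leaving an estimate of the specular edge reflection. The sketch below is illustrative only; it uses the naming convention above with hypothetical file names and is not the patented Mathcad® protocol.

    import numpy as np
    from PIL import Image

    # "sample_v+.tif" (aligned polarizers) and "sample_v-.tif" (crossed)
    # are hypothetical file names following the convention above.
    aligned = np.asarray(Image.open("sample_v+.tif").convert("L"), dtype=float)
    crossed = np.asarray(Image.open("sample_v-.tif").convert("L"), dtype=float)

    # The crossed-polarizer image contains only diffuse reflection
    # (photographic image detail); subtracting it leaves an estimate of
    # the specular edge reflection alone.
    specular = np.clip(aligned - crossed, 0, 255).astype(np.uint8)
    Image.fromarray(specular).save("sample_v_specular.tif")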
The following procedure can be used to perform a texture analysis with the Mathcad® file ("ERA.mcd") that was developed for this research. Please note that the mathematical protocol in this file is intellectual property of the Rochester Institute of Technology and protected by international patent laws. You may need a license to use it. Royalty-free licenses are available for scholarly, not-for-profit use. Please contact the author or the Rochester Institute of Technology for details before you apply this software. The file can be found on a CD-ROM at the end of this report. To use it, Mathcad® 8 or 2001 must be installed on your computer. It is convenient to keep the Mathcad® file and the image files in the same directory, and to work with copies of the Mathcad® file only.
Note that Mathcad® performs the calculation only up to the equations visible on the screen. You must scroll down to the "Results" area before activating the calculation ("F9") in order to obtain complete and correct data.
In order to test the usability of Edge Reflection Analysis for surface texture measurements, field tests were conducted. The tests were carried out although ERA was still in an experimental state: neither the precision of the analysis nor its repeatability had been specified. Thus, the main purpose of these tests was to find out what modifications the technique would require, and what additional information would be necessary to establish future reference catalogues of photographic print surfaces.
Images from sample books distributed by photographic manufacturers served as references. These books document which photographic papers were available during a specific time period, and they provide exact product names. Most papers in these booklets show the distinct surface texture the material would yield if processed and finished according to the manufacturer's guidelines. However, specific processing, drying and finishing techniques applied by individual photographers may have led to varying textures of the same product. This is especially true for fiber base papers with glossy surfaces, which could be ferrotyped, air dried, dried against cloth, or dried between blotters. ERA is probably capable of quantifying such alterations, but an adequate study was beyond the scope of this project. Before a serious attempt to create a photographic paper reference database can be started, such a study will have to be done to separate material-specific texture features from those influenced by a user.
So far, 73 paper surfaces from five different sample books and three different manufacturers have been entered into the database. Some books could not be unbound for this project and only allowed Edge Reflection images in one direction. Still, 322 images were taken, processed and evaluated; 159 images were inserted into the database records. Without software to combine the numerous steps from image acquisition to database entry in a single application, this work was extremely time consuming. Although this is a solvable problem, a huge amount of work will have to be done to collect a representative number of reference records. This indicates that future databases will very likely have to be limited to specific groups of photographs.
When looking at the records in the Photographic Print Characterization Database, it will be found that the two features "average surface angle" and "average element size" calculated from ERA are not sufficient to describe texture completely. Visually, many more features can be observed, and some verbal descriptors were added to illustrate how element shape ("fibrous", "shrunken gelatin", "orange skin", etc.) or defects ("pinholes") could be described. This essential information will have to be standardized and structured. More sophisticated image analysis techniques may be capable of extracting some of it from the edge reflection images, but visual judgement will probably always be an essential part of surface characterization. ERA also does not yet provide complete surface profile information. Once the technique has been refined to this point, many more textural properties will be measurable.
It is interesting to see the range of surface angles and element sizes represented by the samples in this database. Average surface element size varies from 0.11 mm to 0.72 mm, and average surface angles vary between 0.3 and 4.0 degrees. In the graph below, the records are displayed according to their average surface angle and average element size and marked according to their sheen (Fig. 33). Although the polynomial trend lines seem to suggest a correlation, the data are scattered over a wide range of values. Matte prints show relatively small element sizes, but their average surface angle may be anywhere between one and four degrees. Glossy prints do not appear to require a specific element size and exist across the full range of surface angles. The same is true for semi-matte surfaces. This indicates that ERA is probably of limited use for quantifying surface sheen.
However, ERA does seem capable of quantifying smoothness or roughness. Fig. 34 shows the same records again, this time marked according to the general texture description attributed to each sample based on visual perception. Now the data form distinct groups with very little overlap.
If a paper shows large surface elements, but its surface angles are small, one will name it "smooth". It is still smooth, if surface angles increase up to about 2.5 degrees, but element size decreases. Above approximately 2.5 degrees, and with small to average element size, a surface is classified as "fine grained". With increasing element size, the surface angle can vary between large and small values and the paper will be called "rough". ERA seems to provide results that nicely correspond to visual perception of smoothness or roughness.
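These groupings can be restated as a crude decision rule. In the following sketch, the 2.5-degree boundary comes from the description above, while the 0.35 mm element-size cut is an illustrative assumption; the real group boundaries in Fig. 34 are certainly not strict rectangles.

    def texture_class(angle_deg, size_mm):
        """Crude restatement of the visual grouping in Fig. 34. The
        2.5-degree boundary comes from the text; the 0.35 mm size cut
        is an illustrative assumption."""
        if angle_deg <= 2.5:
            return "smooth"
        if size_mm <= 0.35:
            return "fine grained"
        return "rough"

    print(texture_class(1.2, 0.60))   # large elements, small angles -> smooth
    print(texture_class(3.2, 0.20))   # steep, small elements -> fine grained
    print(texture_class(3.2, 0.55))   # steep, large elements -> rough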
However, the measurement of element size may need refinement, because the current mathematical protocol seems to favor the smaller elements (responsible for the matte appearance) over the visually more perceptible large elements when calculating the average spatial frequency. The single black square in the upper left part of Fig. 32 is an example of this phenomenon: the respective sample has a rough texture but a matte surface, causing a misleadingly small value for element size. For rough textures with a certain gloss component, the analysis seems to be correct. The graphic display of all frequencies present in a particular image (FFT graph, noise power vs. frequency, see Fig. 21) shows the actual size distribution, but for a database it would be helpful to have additional numbers for second or third order element sizes.
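Such second and third order sizes could be read off the power spectrum directly. The sketch below illustrates the idea on a synthetic brightness profile; the scale, the profile and the whole approach are illustrative assumptions, and the patented Mathcad® protocol is not reproduced here.

    import numpy as np

    # Synthetic brightness profile with three element sizes (0.32, 0.16
    # and 0.08 mm at an assumed scale of 100 pixels/mm).
    pixels_per_mm = 100.0
    x = np.arange(1024)
    profile = (np.sin(2 * np.pi * x / 32)
               + 0.5 * np.sin(2 * np.pi * x / 16)
               + 0.25 * np.sin(2 * np.pi * x / 8))

    power = np.abs(np.fft.rfft(profile - profile.mean())) ** 2
    freqs = np.fft.rfftfreq(profile.size, d=1.0 / pixels_per_mm)  # cycles/mm

    # Report the strongest peaks as first/second/third order element sizes.
    order = np.argsort(power[1:])[::-1] + 1   # skip the DC term
    for rank, i in enumerate(order[:3], start=1):
        print(f"order {rank}: element size ~ {1.0 / freqs[i]:.2f} mm")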
At this point, the above attempt to find a correlation between measured data and visual judgement is of a preliminary nature only, and must remain incomplete and simplistic. Combinations of measured data may however be used to generate verbal information, which in turn can make the database more useful for less technically oriented users. An example of such a feature was built in with the field "orientation": a "case" scenario was programmed in FileMaker Pro, which uses the percentage of element elongation to create a verbal descriptor. Below 5%, no orientation was stated. Between 5 and 15%, orientation was considered "minimal"; between 15 and 25%, "slight". Above 25%, element orientation was judged "distinct". These terms refer to the visual perception when switching between images taken in the "v" (vertical) and "h" (horizontal) directions. Vertical and horizontal are arbitrarily chosen assignments. For evaluation purposes, it may be more appropriate to distinguish between the directions showing the largest and smallest element size. More thorough research may define other threshold values for specific degrees of orientation.
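Expressed outside FileMaker Pro, the "case" scenario amounts to a few threshold comparisons, for example:

    def orientation_descriptor(elongation_pct):
        """Verbal descriptor from the percentage of element elongation,
        using the threshold values given above."""
        if elongation_pct < 5:
            return "none"
        if elongation_pct <= 15:
            return "minimal"
        if elongation_pct <= 25:
            return "slight"
        return "distinct"

    print(orientation_descriptor(18))   # -> "slight"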
An examination report for a typical photographic print is included at the end of this chapter.
The structure of the Edward Weston Print Characterization Database is very similar to that of the Photographic Print Characterization Database. The two could actually be combined or made relational, with specific information from each database displayed in shared layouts. Another relation could link them to the database compiled by Tania Passafiume (Passafiume 2001), which contains thorough and standardized cataloguing information based on visual examination and George Eastman House catalogue data, as well as texture and sheen specifications based on visual comparison to reference samples. This would ultimately lead to a very powerful research instrument. More such data could be collected in other E. Weston collections and contributed to the databases online. If its evaluation is combined with the connoisseurship of knowledgeable collectors, curators, etc., very precise information about the photographer's work could be obtained.
When browsing through the database records, one will see that the range of surface textures is much smaller than that represented by the samples in the Photographic Print Characterization Database. After looking at a couple of records in random order, a viewer will probably lose track of the more subtle differences between them. For a more systematic approach, ERA data and verbal descriptors can be used to design database queries that help find correlating images. However, this will be a time consuming task, as standard combinations of surface properties are not yet defined; a user will have to adjust statistical evaluation parameters carefully. More research is needed to establish a set of useful technical measurements which, in combination with descriptors from visual judgement and data from additional resources, best describe the relevant differences or similarities between surface textures. The distribution of average element size within the prints from the George Eastman House collection is shown in Fig. 35. The graph seems to indicate that the distribution is random (or Gaussian). To verify this, the precision of the measurements would have to be specified. If the missing bars at 0.22, 0.24 and 0.26 mm were not caused by experimental error, the voids would indicate that none of E. Weston's photographs actually shows texture with these average element sizes.
An examination report for an E. Weston print is included at the end of this chapter.
Before the databases created for this project can be used, a user is required to purchase the software FileMaker Pro 5.0. The future usability of the databases largely depends on the availability of this product, and the files will have to be updated to ensure that they remain compatible with future versions of FileMaker. Future software design may consider a stand-alone application that runs directly from a CD-ROM drive, the user's hard disk or an Internet server. It could combine image analysis and database, and could include a visual finding aid similar to "field guides" in other disciplines. It can be assumed that future users in the field of photograph conservation will not be perfectly familiar with the details of surface texture analysis. Depending on the intended application, the software may offer different levels of information. The following example illustrates a typical scenario:
A user with access to digital imaging equipment wants to identify an unknown photographic material. He/she creates an image of the material's surface as described in the chapters on ERA and Image Capture. After the image and calibration data have been entered into the database, the software performs the analysis and displays the image in the middle of the screen, surrounded by a variety of reference images with similar textures. The user would be asked to select the one image that looks most similar to the sample he/she wants to identify. Clicking on this image would display a new set of reference images with more similar properties. The user would continue this selection process until he/she reaches the most precise level of information or finds a matching reference. Technical data, including information about the statistical probability of a real match to a reference from the database, could be displayed on request and included in a printed examination report.
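At its core, each round of this selection process is a similarity query over the ERA features. A minimal sketch follows; the record contents, field names and distance weighting are all hypothetical.

    import math

    # Hypothetical reference records holding the two ERA features.
    records = [
        {"id": "Agfa Brovira, glossy",    "angle": 0.8, "size": 0.15},
        {"id": "Kodak Velox, semi-matte", "angle": 3.1, "size": 0.22},
        {"id": "Schoeller matte",         "angle": 2.4, "size": 0.55},
    ]

    def nearest(sample, refs, n=3):
        """Return the n records most similar to the sample, using a
        distance normalized by the ranges observed in the database."""
        def dist(r):
            return math.hypot((r["angle"] - sample["angle"]) / 4.0,
                              (r["size"] - sample["size"]) / 0.7)
        return sorted(refs, key=dist)[:n]

    # One round of the selection loop: show candidates, let the user
    # pick one, then query again around the chosen record.
    for r in nearest({"angle": 2.9, "size": 0.20}, records):
        print(r["id"])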
Even without digital imaging equipment, the database could serve as a field guide. A modified portable microscope would allow visual on-the-spot comparison of unknown photographs to database references. An attached diffuse light source could provide the edge reflection, and a built-in reticle could allow the measurement of texture element size.
This study was realized as the capstone research project during the Mellon Advanced Residency Program in Photograph Conservation. It has resulted in the development of a new analytical technique for the measurement of surface topography, which we have named Edge Reflection Analysis (ERA). The technique is useful for surfaces with a certain gloss component, like photographic prints. It is simple, non-destructive, and in its present state based on readily available or easy-to-build equipment. At present, the technique makes it possible to quantify the average surface angle and the spatial frequency of surface texture elements. ERA was developed for the analysis of historic photographic papers, with the intention to create reference databases for treatment documentation, material identification, etc. However, its application may well extend beyond photography.
The technique uses a digital macro image of the surface under examination, showing the specular reflection of the edge of a diffuse light source. This reflection will be more or less scattered, depending on the amount of surface roughness present. An evaluation of this scattering, based on pixel brightness and location, allows the calculation of spatial surface properties.
The precision of the measurements still has to be specified. For surface angle analysis, a precision of ± 0.05 degree is currently assumed, with a range of measurable angles probably below 10 degrees. The size measurements of spatial frequency are assumed to be correct to the second digit.
The evaluation of average surface angle vs. average texture element size corresponds to the visually perceived smoothness or roughness of photographic surfaces. Thus, ERA offers a way to standardize basic terminology and improve communication among scholars. However, the two basic parameters provided by the current mathematical protocol are insufficient to quantify the complex features of photographic surface texture completely. Future research will have to extend the mathematical protocol to provide full surface profile information, including the ability to measure texture element shape and distribution, gloss, etc.
The information gained from ERA was used to create exemplary databases of photographic paper properties. Once a more representative number of samples has been collected, such databases could serve as references for material identification in a photographer-specific or material-specific context. Such information is highly desirable for the conservator, curator, collector and dealer of photography, and may contribute to the development of value systems. ERA can provide additional evidence in an authentication process, and improve the objectivity of condition reports before and after a conservation treatment, thus enhancing knowledge about specific treatment techniques and their influence on the object.
The images created for ERA can also be used on their own, without mathematical evaluation, for visual comparison of surfaces. They provide pure and clearly visible texture information and make good use of the tonal range. This could help to create visual field guides of material surfaces, either in printed or digital formats. Such catalogues could serve as references for on-the-spot analysis for those needing to make quick comparative observations of surface types. A simple modified loupe or portable microscope could provide the required edge reflection image and would allow visual sample-to-reference comparison.
Finally, it should be noted that this research has not yet demonstrated whether the surface texture of any photographic paper shows features of sufficient significance to allow attributing the material to a specific manufacturer or user context. Extensive surveys and lab experiments will be required to give meaningful answers to such questions. This research has provided a new tool; the product still must be built.
Andresen, M., et al., 1930. "Erzeugung und Prüfung lichtempfindlicher Schichten, Lichtquellen." In Hay, Alfred, ed., Handbuch der wissenschaftlichen und angewandten Fotografie, Vol. IV. Vienna: Springer.
Arney, J. and Stewart, D., 1993. "Surface Topography of Paper from Image Analysis of Reflectance Images." Journal of Imaging Science and Technology, 37: 504. Springfield, VA: Society for Imaging Science and Technology.
Arney, J. and Maurer, J., 1994. "Image Analysis and the Documentation of the Condition of Daguerreotypes." Journal of Imaging Science and Technology, 38: 145. Springfield, VA: Society for Imaging Science and Technology.
Bennett, J. and Mattson, L., 1989. Introduction to Surface Roughness and Scattering. Washington, DC: Optical Society of America.
Connors, T. and Banerjee, S., eds., 1995. Surface Analysis of Paper. Boca Raton, FL: CRC Press.
Cunning, E. and Perkinson, R., 1996. The Print Council of America Sample Book. Sun Hill Press.
Dagan, S., 2001. Personal communication, May 2001. Rochester: Eastman Kodak Co.
Gaspar, P., et al., 2000. "Topographical Studies in the Conservation of Statuary Materials." V&A Conservation Journal, 36: 11-14. London: Victoria and Albert Museum.
Gold, J., 2001. Investigation of Methods Used to Misrepresent the Condition and Age of Photographs. Unpublished research report. Rochester: Advanced Residency Program in Photograph Conservation.
Gray, G., 1986. "From Papyrus to RC Paper: History of Paper Supports." In Ostroff, E., ed., Pioneers of Photographic Science and Technology. Springfield, VA: Society of Photographic Scientists and Engineers.
Hornig, K., 2000. Personal communication, September 2000. Osnabrück: Felix Schoeller Holding GmbH.
Messier, P., 2000. A Methodology to Date Photographs Relative to 1950. Paper presented at the AIC/PMG conference 2000. Philadelphia, PA: AIC.
Passafiume, T., 2001. Silver Gelatin DOP Sample Book. Unpublished research report. Rochester: Advanced Residency Program in Photograph Conservation.
Reel, H. and Badura, J., 2000. Personal communication, September 2000. Leverkusen: Agfa-Gevaert AG.
Russ, J., 1999. The Image Processing Handbook. 3rd ed. Raleigh, NC: CRC Press.
Scott, W. and Abbott, J., in collaboration with Trosset, S., 1995. Properties of Paper: An Introduction. 2nd ed. Atlanta, GA: TAPPI Press.
Wagberg, P. and Johansson, P., 1993. "Surface Profilometry: A Comparison between Optical and Mechanical Sensing on Printing Papers." TAPPI Journal, 76: 115.
Weston, E., 1981. The Daybooks. Vols. 1 & 2, 2nd ed. Nancy Newhall, ed. New York, NY: Aperture.
The publications in the above list whose author's name is underlined provide useful information beyond the specific reference and should be included in the recommended reading list. Depending on the reader's knowledge of digital images and photographic technology, some basic handbooks on these subjects may be helpful. In addition, the following publications can be recommended.
Besser, H. and Trant, J., 1995. Introduction to Imaging: Issues in Constructing a Database. Los Angeles, CA: The Getty Art History Information Program.
Schildgen, T., 1998. Pocket Guide to Color with Digital Applications. Albany, NY: Delmar Publishers.
Wentzel, F., 1960. Memoirs of a Photochemist. Philadelphia, PA: American Museum of Photography.
The following equipment and software were used for this research:
Digital color video camera: Hitachi KP-D 50
Lens: Beslar 1:3.5/75 mm
Illumination: Lumina FO 150 cold light source with fiber light guides and custom-made diffusor box
Computer: PC clone notebook computer with Intel Celeron processor, 300 MHz, 128 MB RAM, Windows 98
Framegrabber: Snappy, Play Inc.
Operating software: Snappy Video Snapshot 4.0, Play Inc.
Image processing software: Scion Image for Windows®, version Beta 4.0.2, Scion Corp., www.scioncorp.com; CISlab®, version 1.0.85, Center for Imaging Science, Rochester Institute of Technology, www.cis.rit.edu; Mathcad®, versions 8 and 2001, MathSoft, Inc., www.mathsoft.com
Data analysis and storage: Microsoft® Excel 97, Microsoft Corp., www.microsoft.com; FileMaker Pro®, version 5.0v3, FileMaker, Inc., www.filemaker.com
Report design: Microsoft® Word 97, Microsoft Corp., www.microsoft.com
To view the draft article, click here for the formatted PDF of this research.
To view the Supplement to "Documentation and Characterization of Photographic Surfaces by Edge Reflection Analysis", click here for the formatted PDF of this research. The original database is a FileMaker Pro file that can be found on the CD-ROM at the end of the project report.
Available from the conservation library at the George Eastman House.
This work would not have been possible without the generous support of many organizations and individuals, to whom I would like to express my deepest gratitude. It was the Andrew W. Mellon Foundation and ANGELICA RUDENSTINE who provided the funding for the two-year fellowship and who had the insight to extend their support whenever it was needed. I received additional funding from the German Alfried Krupp von Bohlen und Halbach Foundation, represented by DR. BERTHOLD BEITZ, which was a tremendous help and allowed me to meet the financial responsibility for my family during my time in Rochester.
I thank GRANT ROMER and JIM REILLY for their great support throughout my time as a Fellow in the Advanced Residency Program in Photograph Conservation, especially for pointing to the field's needs and helping to shape this project. They both helped to get (and keep) me on track and were there when I needed their help and understanding. Without JONATHAN ARNEY, his experience in imaging science, his mathematical knowledge and his enthusiasm, this work would not have come anywhere near the scientific quality it has now. It might not even exist at all.
Thank you, FRANZISKA FREY, for eagerly putting me in contact with the digital world and for valuable suggestions throughout this work; GARY ALBRIGHT, for not insisting on more treatments; MARK OSTERMAN, for sharing his deep insight into 19th century technology (and Dapper Dan); and DOUG NISHIMURA, for his sympathy for my ignorance in chemistry and statistics.
I would also like to thank my co-fellows ALEXANDRA BOTELHO, LAURA DOWNEY, JENS GOLD, DANA HEMMENWAY, KATHERINE KILDE, TAMARA LUZECKYJ, and TANIA PASSAFIUME for letting me get on their nerves, for their support and for the fun we had together.
A big "Thank you" also goes to the George Eastman House staff, namely the Archives with DAVID WOOTERS, JOE STRUBLE and JANICE MADHU, for making the Edward Weston prints available, to RICK HOCK and his staff at the exhibitions department, to LAURA BROWN and others in the administration, to ANDRA RUSSEK for keeping in touch, and to the staff of the Image Permanence Institute for their kind support. It would fill more pages to describe in detail the help I received from many other individuals, but at least their names should be mentioned here in alphabetical order:
JOACHIM BADURA, Agfa AG; SANDRA DAGAN, Eastman Kodak Co.; BODO V. DEWITZ, Agfa Fotohistorama; NICHOLAS M. GRAVER; MAX HEIGL; DEBBIE HESS-NORRIS; KNUT HORNIG, F. Schoeller Holding GmbH & Co. KG; NORA KENNEDY; PAUL MESSIER; JAMES MICHAEL; HENNING REEL, Agfa AG; and DAVID J. VALVO, Eastman Kodak Co.
Klaus Pollmeier was an ARP fellow from 1999 to 2001. This report is an application case study of the use of edge reflection analysis and was submitted as a supplement to Documentation and Characterization of Photographic Surfaces by Edge Reflection Analysis, the author's capstone project report. Currently, Klaus is the program coordinator for the Conservation of New Media and Digital Information at the Staatliche Akademie der Bildenden Künste in Stuttgart, Germany.