In this project you will write an image editing program that allows you to load in one or more images and perform various operations on them. Consider it to be a miniature Photoshop or a reasonably functional copy of XV.
Things on this page may be modified as circumstances warrant it, so check back frequently for changes.
When we grade your project, we will compile the code and run the resulting executable. If your project does not compile on the machines in 1355 then it will not be graded. We will test the operations you provide and look at the results to determine your grade. Your README file should state which operations you implemented.
The "submission date" will be determined by the latest modification date of any file in your directory. Do not modify files after the deadline unless you wish to incur the late penalty. The late penalty is 10% per day. See the class policies page for details.
The program should maintain the current image, which is displayed. The current image is modified by the various operations as outlined below.
All files will be in the Targa (tga) format. LibTarga supports pre-multiplied RGBA images. To load the alpha bits, tell it that you are loading 32 bit data, and it will fill the alpha channel (with ones if necessary) along with the color information. When you read an RGBA image with LibTarga, it returns pre-multiplied alpha pixel data. You must divide out the alpha channel before display, taking care to avoid dividing by zero.
As it is currently implemented, the skeleton will execute all the commands in a script file that is given as an argument (try test-all.p1s to see what happens). To specify arguments in Visual C++, go to the Debug part of the project settings dialog. The skeleton also provides a single line command entry dialog. To execute a command, type it in and hit the Enter key. Hitting Enter again will run the same command again. You can of course change the command. Try "load test.tga" to load the test image. "save test-save.tga" also works. (Leave out the quotes when you type things in.)
The skeleton is slightly modular in design. In particular, the widget for displaying an image is separate from the object for storing the image, and both are separate from the main window itself. I would recommend modifying Project1.cpp to call functions that implement each command. The main work functions should be members of the TargaImage class, or some other class that you create.
TargaImage stores RGBA. Many of the operations only need grayscale data, so consider writing a separate class for grayscale images, although there are pros and cons to that approach. The alternative is to store grayscale data in the red channel of an image, along with a flag indicating whether the image is grayscale or RGB.
Many of the operations you need to implement are very similar. For instance, all the filter operations differ only in the filter mask, not the basic filtering algorithm. Write your program to take advantage of such common operations.
You can provide additional menus, buttons and dialogs to access operations if you desire, but no credit is given for them.
At the bottom of this page is a table of input/output image pairs for each operation.
|Load||load||filename||Load the specified image file and make it the current image.||0, provided|
|Save||save||filename||Save the current image to the specified file.||0, provided|
|Color to Grayscale||gray||Use the formula I = 0.299r + 0.587g + 0.114b to convert color images to grayscale. This will be a key pre-requisite for many other operations. This operation should not affect alpha.||3|
|24 to 8 bit Color||All of these operations assume that the current image has 24 bits of color information. They should still produce 24 bit images, but there should only be 256 different colors in the resulting image (so the image could be stored as an 8 bit indexed color image). Don't be concerned with what happens if you run these operations on something that is already quantized. These operations should not affect alpha.|
|Uniform Quantization||quant-unif||Use the uniform quantization algorithm to convert the current image from a 24 bit color image to an 8 bit color image. Use 4 shades of blue, 8 shades of red, and 8 shades of green in the quantized image.||5|
|Populosity||quant-pop||Use the populosity algorithm to convert the current 24 bit color image to an 8 bit color image. Before building the color usage histogram, do a uniform quantization step down to 32 shades of each color. Find the 256 most popular colors, then map each original color onto its closest chosen color. To find the closest color, use the euclidean (L2) distance in RGB space: if (r1,g1,b1) and (r2,g2,b2) are the colors, the distance is sqrt((r1-r2)^2 + (g1-g2)^2 + (b1-b2)^2), suitably converted into C++ code.||12|
|Median Cut||quant-med||Use the median cut algorithm to convert the current image to an 8 bit color image. Use the centroid of each box as the color for that box, and map each pixel color to the centroid of the box in which it falls. You will actually get better results if you map each color to the nearest centroid (which is not necessarily that of the box the color falls into); take that approach if you like. There are several other implementation details in this algorithm, which you may resolve as you choose.||20|
|Dithering||All of these operations should convert the current image into an image that only contains black and white pixels. If the current image is color, you should automatically convert it to grayscale first (in fact, you could convert all images to grayscale - it won't hurt already gray images). These operations should all threshold dither the alpha channel, regardless of what they do to the gray channel.|
|Threshold Dithering||dither-thresh||Dither an image to black and white using threshold dithering.||3|
|Random Dithering||dither-rand||Dither an image to black and white using random dithering. Add random values chosen uniformly from the range [-0.2,0.2], assuming that the input image intensity runs from 0 to 1 (scale appropriately). There is no easy way to match a reference image with this method, so do not try.||5|
|Ordered Dithering||dither-order||Dither an image to black and white using ordered dithering with the matrix shown below. Note that the lecture notes were incorrect: the image pixels should be compared to a threshold that depends on the dither matrix below. The pixel at (x,y) should be drawn white if I[x][y] >= mask[x%4][y%4]. The matrix is:
0.1250 1.0000 0.1875 0.8125
0.6250 0.3750 0.6875 0.4375
0.2500 0.8750 0.0625 0.9375
0.7500 0.5000 0.5625 0.3125|
|Floyd-Steinberg Dithering||dither-fs||Dither an image to black and white using Floyd-Steinberg dithering with the improvements given in homework 2 (distribution of error to four neighbors and zig-zag ordering).||10|
|Filtering||All of these operations should modify the current image, and assume color images. The alpha channel should NOT be filtered. The alpha channel for all the test images will be 1 for all pixels, so you do not need to worry about the differences between filtering regular pixels or pre-multiplied pixels. Implement whichever approach you prefer.||15 for the first, 3 for each additional|
|Box Filter||filter-box||Apply a 5x5 box filter.|
|Bartlett Filter||filter-bartlett||Apply a 5x5 Bartlett filter.|
|Gaussian Filter||filter-gauss||Apply a 5x5 Gaussian filter.|
|Edge Detect (High-Pass)||filter-edge||Apply a 5x5 edge detect filter derived from a Gaussian as indicated in the lectures. (Note that the lecture notes derive the edge detect filter from a Bartlett, so the matrix used in this operation should not be identical). Clamp pixel values that fall outside the range 0-255.|
|Edge Enhance||filter-enhance||Apply a 5x5 edge enhancement operator, using a Gaussian filter or the edge detect filter derived from a Gaussian (rather than subtracting a smoothed image and then adding back in, it is easier to compute the edge detect filtered image and then add that back into the original). It is also possible to come up with a single filter to do it all in one pass. You should clamp pixel values that fall outside the range 0-255.|
|Image Resizing||All of these functions should change the size of the current image by the appropriate amount. They should also operate on the alpha channel.|
|Half Size||half||Halve the image size, using a 5x5 Bartlett filter to perform the smoothing.||8|
|Double Size||double||Double the image size, using a 5x5 Bartlett filter to compute the intermediate pixel values.||12|
|Arbitrary Uniform Scale||scale||amount||Scale the image up or down by the given multiplicative factor. By uniform scaling I mean scale the x and y axes by the same amount, so the aspect ratio does not change. Use Bartlett filters for the reconstruction. The filter size should vary so that you always pick up at least nine (three by three) values from the input image in constructing each pixel of the output image.||20|
|Compositing||image||All of these operations should composite the current image, A, with the specified image, B, using A op B, where op is one of the operations below. The result should replace the current image.||15 for the first, 3 for each additional|
Below is a table of input and output images for each operation. Read the associated notes. Some images you will not be able to reproduce - they are random in nature or depend on slight differences in the implementation of the operation. Do not waste time trying to reproduce an image if the notes indicate it is difficult. Instead, ensure that your algorithm produces qualitatively similar results, which is the primary basis for grading.
To help you compare images, a program called Difference has been made available. It takes two arguments - the filenames of the two images to compare - and shows how they differ in the color and alpha channels. Two identical images will have a completely black difference. The source code is also available.
It appears that Netscape corrupts the images when it downloads them. To avoid downloading, use the copies in p:\course\cs559-schenney\public\project-1\images\. Do not make copies unless you have to. It's a major waste of space.
|gray||colors-for-bw.tga||gray.tga||You should be able to reproduce this exactly.|
|quant-unif||wiz.tga||quant-unif.tga||You probably cannot reproduce this exactly. Your result should, however, show the same poor quality and color banding effects.|
|quant-popul||wiz.tga||quant-popul.tga||You probably cannot reproduce this exactly. A populosity algorithm should do a reasonable job on the gray floor, and not too badly on the browns. It should, however, draw the blue ball as gray, because there are not enough blue pixels to be popular.|
|quant-med||wiz.tga||quant-med.tga||You probably cannot reproduce this exactly. Your result should, however, show the same good quality. In particular, it should show the ball as blue and the floor as gray.|
|dither-thresh||church.tga||dither-thresh.tga||You should be able to reproduce this almost exactly. Some pixels may be different around the boundaries between white and black.|
|dither-rand||church.tga||dither-rand.tga||You have no chance of reproducing this exactly. Instead, you should get an image that is similar in style but not identical.|
|dither-order||church.tga||dither-order.tga||You should be able to reproduce this almost exactly. A few borderline pixels (those close to the threshold) may be different.|
|dither-fs||church.tga||dither-fs.tga||There's a good chance you can reproduce this exactly, but it is not essential. The character of your result should be similar.|
|filter-box||checkers.tga||filter-box.tga||You may get different results around the boundary, but interior pixels should be identical. This filter effectively extended the size of the input image by reflecting it about its edges.|
|filter-bartlett||checkers.tga||filter-bartlett.tga||You may get different results around the boundary, but interior pixels should be identical. This filter effectively extended the size of the input image by reflecting it about its edges.|
|filter-gauss||checkers.tga||filter-gauss.tga||You may get different results around the boundary, but interior pixels should be identical. This filter effectively extended the size of the input image by reflecting it about its edges.|
|half||checkers.tga||half.tga||You may get slightly different results, particularly around the boundary.|
|double||checkers.tga||double.tga||You may get slightly different results, particularly around the boundary.|
|scale 0.7||checkers.tga||scale-0.7.tga||You may get different results, but they should be qualitatively similar (no banding).|
|scale 1.3||checkers.tga||scale-1.3.tga||You may get different results, but they should be qualitatively similar (no banding).|
|comp-over||zred.tga over zcolorcheck.tga||comp-over.tga||You should get identical results. You need to look at the alpha channel to check.|
|comp-in||zred.tga in zcolorcheck.tga||comp-in.tga||You should get identical results. You need to look at the alpha channel to check.|
|comp-out||zred.tga out zcolorcheck.tga||comp-out.tga||You should get identical results. You need to look at the alpha channel to check.|
|comp-atop||zred.tga atop zcolorcheck.tga||comp-atop.tga||You should get identical results. You need to look at the alpha channel to check.|
|comp-xor||zred.tga xor zcolorcheck.tga||comp-xor.tga||You should get identical results. You need to look at the alpha channel to check.|