Our Methods

We decided to work in an almost completely image-based context for several reasons. First, it keeps the geometry handling simple, and it lets us include arbitrary geometry (such as third-party meshes) at little extra cost. It also lets us treat the size of the "paper" more intuitively.

Our strategy is simple: we created our own rendering pipeline that processes the objects and then paints the image. The pipeline has the following stages (details below):

  1. Coverage test
  2. Shading filter
  3. Stroke filter
  4. Subtractive-color calculation (for overlapping strokes)
  5. Blur
  6. Edge drawing

Most of these stages rely heavily on several input maps that we compute:

  1. Object boundary map
  2. Normal vector map
  3. Edge map
  4. Lighting/shading map
  5. Stroke direction map
  6. Stroke shape map

Details

Coverage Test

The coverage test mimics a quality of colored pencils: with light strokes, only some pigment particles stick to the paper, and only on the tops of the bumps in the paper. The user supplies a coverage value (from 0 to 1) through a slider in the UI, and the test allows that fraction of pixels to be drawn, selecting them at random.
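
As a minimal sketch of this stage (assuming the image is a floating-point array in [0, 1] and that pixel selection is uniformly random; the names here are illustrative, not from the actual implementation):

    import numpy as np

    def coverage_test(color_image, coverage, rng=None):
        # color_image: (H, W, 3) float array in [0, 1]; coverage in [0, 1].
        rng = np.random.default_rng() if rng is None else rng
        h, w = color_image.shape[:2]
        # Each pixel independently passes the test with probability `coverage`.
        mask = rng.random((h, w)) < coverage
        # Pixels that fail the test are left as bare paper (assumed white here).
        out = np.ones_like(color_image)
        out[mask] = color_image[mask]
        return out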

Shading Filter

The shading filter takes the flat-colored output of the previous stage and applies realistic lighting effects to it. Volumetric and projected shadows are currently unsupported, but the directional light alone can produce very pleasing results. The filter uses a lighting map that is white where the color is unaffected and darker where the color should be darkened (because the light isn't pointing directly at the surface, for example). The map is computed simply by having OpenGL render all objects as if they were white; the current color is then multiplied by the resulting value.
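
A sketch of the multiply step, assuming the white-rendered lighting map is already available as a grayscale array (names are illustrative):

    import numpy as np

    def shading_filter(color_image, lighting_map):
        # lighting_map: (H, W) grayscale render of the scene in white;
        # 1.0 where fully lit, smaller where the light points away.
        # Multiplying leaves lit areas unchanged and darkens the rest.
        return color_image * lighting_map[:, :, np.newaxis]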

Stroke Filter

The stroke filter converts the colors so they look like they were drawn with a pencil. To do this, we generate realistic strokes and compute a direction for each point: the screen-space angle at which the stroke should be drawn. We then rotate the screen coordinate by that angle and use the stroke value sampled at the new coordinate as an alpha-blending value.
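
A sketch of the sampling step, assuming a precomputed tileable grayscale stroke texture and a per-pixel angle map (both hypothetical inputs, named here for illustration):

    import numpy as np

    def stroke_filter(color_image, stroke_tex, angle_map):
        # stroke_tex: (th, tw) grayscale stroke pattern, 1 = full pigment.
        # angle_map:  (H, W) per-pixel stroke angle in radians.
        h, w = color_image.shape[:2]
        th, tw = stroke_tex.shape
        ys, xs = np.mgrid[0:h, 0:w]
        c, s = np.cos(angle_map), np.sin(angle_map)
        # Rotate each screen coordinate by its stroke angle, then sample
        # the (tiled) stroke texture at the rotated position.
        rx = np.round(c * xs - s * ys).astype(int) % tw
        ry = np.round(s * xs + c * ys).astype(int) % th
        alpha = stroke_tex[ry, rx][:, :, np.newaxis]
        paper = np.ones_like(color_image)  # blend toward white paper
        return alpha * color_image + (1.0 - alpha) * paper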

To compute the direction, we use two methods. For flat-faced objects (cubes and disks), we choose one edge of each face, compute the screen-space angle of that edge, and fill the entire polygon with that angle (since all strokes on a face should be drawn at the same angle). For round objects, we compute a direction by taking the cross product of the normal and camera vectors, which yields a direction perpendicular to both the surface normal and the view direction. While this is not exactly how an artist would choose stroke directions, it produces realistic results.
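
For the round-object case, the per-pixel angle might be computed from the normal map along these lines (assuming the normals and the view vector share a camera-space frame; the names are illustrative):

    import numpy as np

    def stroke_angles_from_normals(normal_map, view=(0.0, 0.0, 1.0)):
        # normal_map: (H, W, 3) unit normals in camera space (an assumption).
        # cross(normal, camera) is perpendicular to both vectors.
        d = np.cross(normal_map, np.asarray(view))
        # Project onto the screen plane and take the 2D angle.
        return np.arctan2(d[..., 1], d[..., 0])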

Subtractive-Color Calculation

The color model usually used in computer graphics, RGB, doesn't accurately capture the behavior of colored pencils (or most other similar drawing media). In short, the more color you add in an RGB system, the lighter the result; the opposite is true of colored pencils, where adding more color darkens the result. For example, no pencil can lighten a black piece of paper, while in an RGB system, no matter what color you add to white, it stays white.

For these reasons we developed our own subtractive color model, which we feel accurately models pencil on paper. We confirmed this by drawing over a red pencil stroke with a blue one: the result is a dark violet, as it would be with real colored pencils. Our system does not currently restrict the user to a small palette of pencil colors, but this technique would make that addition easy to implement.
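
The text doesn't spell out the blending formula, but one simple subtractive model with the described behavior multiplies colors componentwise: each stroke acts as a filter that absorbs part of the light the paper reflects, so layering strokes can only darken. A sketch, with illustrative (not actual) pencil colors:

    import numpy as np

    def subtractive_blend(a, b):
        # Each stroke filters the light reflected by the paper, so
        # overlaps multiply: never lighter than either input.
        return a * b

    # Slightly desaturated "pencil" red and blue multiply to a dark violet,
    # matching the red-over-blue test described above.
    red = np.array([0.9, 0.2, 0.3])
    blue = np.array([0.3, 0.2, 0.9])
    print(subtractive_blend(red, blue))  # [0.27 0.04 0.27], a dark violet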

Blur

The final output looks too precise to have been drawn by hand in pencil, so we provide the option of blurring it. The blur is a 5x5 Gaussian filter.
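
One common way to build such a filter, sketched below, uses the separable binomial weights (1, 4, 6, 4, 1); the exact kernel is not specified in the text, so this choice is an assumption:

    import numpy as np

    def gaussian_blur_5x5(image):
        # Separable 5x5 Gaussian built from binomial weights (1, 4, 6, 4, 1).
        k = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
        k /= k.sum()
        out = np.asarray(image, dtype=float)
        for axis in (0, 1):  # blur rows, then columns
            pad = [(2, 2) if a == axis else (0, 0) for a in range(out.ndim)]
            p = np.pad(out, pad, mode="edge")
            n = out.shape[axis]
            out = sum(w * np.take(p, range(i, i + n), axis=axis)
                      for i, w in enumerate(k))
        return out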

Drawing Edges

Another technique some artists use is outlining their work to clarify the edges of shapes. After all other rendering is done, we accent the edges of the shapes to the same end. Edges are detected from the object boundary map and the normal map: we run a 3x3 Laplacian filter over each separately and combine the results. The object boundary map gives perfect object silhouettes but misses internal faces, while the normal map gives fairly accurate internal faces, so the combination of the two produces very pleasing edges.
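
A sketch of this pass, assuming both maps are available as arrays; the threshold and the rule for combining the two responses (a per-pixel maximum) are assumptions:

    import numpy as np

    def draw_edges(boundary_map, normal_map, threshold=0.1):
        # boundary_map: (H, W) silhouette or object-ID image.
        # normal_map:   (H, W, 3) per-pixel normals.
        lap = np.array([[0.0,  1.0, 0.0],
                        [1.0, -4.0, 1.0],
                        [0.0,  1.0, 0.0]])

        def laplacian(img):
            h, w = img.shape
            p = np.pad(img, 1, mode="edge")
            return sum(lap[i, j] * p[i:i + h, j:j + w]
                       for i in range(3) for j in range(3))

        # Silhouettes from the boundary map, internal faces from the normals.
        e1 = np.abs(laplacian(boundary_map))
        e2 = sum(np.abs(laplacian(normal_map[..., c])) for c in range(3))
        return np.maximum(e1, e2) > threshold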