Repolygonizing Fonts

This is the third of a four-part series on the technology behind Scribd’s HTML viewing experience. You might like to read part 1, “Facing Fonts in HTML” and part 2, “The Perils of Stacking,” if you haven’t already. Part 4, “Plan B: Font Fallbacks” is coming soon.

Every single day, Scribd processes over 150,000,000 polygons in order to convert your uploaded documents into our new HTML format (among others).

So why on earth would something like this be necessary? To answer that, we first have to talk a bit about fonts. A font, in its most simplistic form, is just a bundle of polygons, with a Unicode index attached to each one. It's these font polygons that this post is about.

Of course, TrueType fonts and their descendants (.eot fonts, OpenType fonts, etc.) store much more than that: a typical font contains thousands of lines of vertex-adjusting bytecode (hence the name "font program"), multiple encoding tables, a vast amount of miscellaneous font metrics, and so on. That is what this post is not about.

For our HTML conversion, we had to solve the following problem: how do you encode an arbitrary polygon as a font glyph for use in a @font-face declaration? The answer is font repolygonization: the process of rebuilding the polygons in a font so that they meet the requirements of the font format. It turns out that to repolygonize a font properly, you first have to know a thing or two about fill-rules.

Polygons in computer graphics generally come in two "flavors": even-odd-filled or nonzero-filled. The two schemes use different rules to determine whether a given point is inside or outside the glyph. For even-odd, every line in the polygon separates something on the inside from something on the outside:


Even-odd filled glyphs

For nonzero, on the other hand, the direction of a line matters. Two lines running in the same direction mean the area beyond the second line is still filled; to cut a hole into a polygon, the hole's outline has to run in the opposite direction:


Nonzero-filled glyphs (with direction indicators)

For even-odd polygons the segment direction doesn't matter, but for nonzero-filled polygons the segments also have to run in the right direction. TrueType and Embedded OpenType (.eot) fonts use the nonzero fill rule. For .svg fonts, you get to choose (see the fill-rule property). So to properly encode an arbitrary polygon as a font glyph, we need to "fix" the fill-rule.
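
To make the difference concrete, here is a rough Python sketch (illustrative only, not our production code; the contour representation is a made-up list of point lists) that classifies a point under both rules by casting a horizontal ray to the right:

# A rough Python sketch of the two fill rules (illustrative only, not our
# production code). A "polygon" here is a list of closed contours, and each
# contour is a list of (x, y) points.

def edges(contour):
    return zip(contour, contour[1:] + contour[:1])

def ray_counts(point, contours):
    """Cast a horizontal ray to the right and count crossings (even-odd)
    and the signed winding number (nonzero)."""
    px, py = point
    crossings = 0   # parity decides the even-odd rule
    winding = 0     # signed sum decides the nonzero rule
    for contour in contours:
        for (x0, y0), (x1, y1) in edges(contour):
            if (y0 <= py) != (y1 <= py):             # edge crosses the ray's height
                x_at_py = x0 + (py - y0) * (x1 - x0) / (y1 - y0)
                if x_at_py > px:                     # ...to the right of the point
                    crossings += 1
                    winding += 1 if y1 > y0 else -1  # only here does direction matter
    return crossings, winding

def filled_even_odd(point, contours):
    return ray_counts(point, contours)[0] % 2 == 1

def filled_nonzero(point, contours):
    return ray_counts(point, contours)[1] != 0

# A square with a smaller square inside it, both drawn counter-clockwise:
outer = [(0, 0), (10, 0), (10, 10), (0, 10)]
inner = [(3, 3), (7, 3), (7, 7), (3, 7)]
print(filled_even_odd((5, 5), [outer, inner]))   # False: even-odd makes it a hole
print(filled_nonzero((5, 5), [outer, inner]))    # True: same direction, so no hole

Under the nonzero rule the inner square only punches a hole if it is drawn clockwise, i.e. in the opposite direction to the outer one; under even-odd it is a hole either way.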

Another thing that's important for properly encoding a TrueType font is that all contours need to be continuous. Depending on where the source polygon comes from, this isn't necessarily the case. In some vector graphics formats (e.g. SWF), line segments can easily be drawn in an arbitrary order and still define a filled shape:


degenerate glyph (even-odd filled)

We found that while we're recoding and fixing a polygon, a convenient way to store the end result is as a hybrid polygon that fills identically under both the even-odd and the nonzero rule:


Self-intersected glyphs, valid as both even-odd and nonzero-filled

At Scribd, we rely heavily on this hybrid approach when repolygonizing fonts so that we can produce output files for both our Flash and HTML reader platforms at the same time. "But wait!" you cry, "didn't you say you generate HTML fonts from fonts stored in PDF files? Why aren't those already encoded with the right fill-rule and continuous segments?" Good question.

First of all, we simplify font polygons, which can break the fill-rule they were encoded with. Secondly, some "fonts" in a PDF are no more than a vector drawing in a font's clothing. Finally, font characters may get overlapped, transformed, combined, intersected, etc. during conversion.

So how do you convert a polygon to a different fill-rule? This gets us back to the polygon intersections behind those 150,000,000 polygons per day: you intersect the polygon with itself and, at the same time, relabel the resulting contours according to a fill-rule of your choice. In fact, we intersect the polygon not only with itself but also with other elements such as clip shapes, in order to remove invisible or half-hidden elements from the page.
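
Once the self-intersection pass has split everything into clean, non-crossing contours, fixing the fill-rule boils down to fixing contour orientations. Here is a rough Python sketch of that re-orientation step (illustrative only; the data structures are hypothetical and the probe-point test assumes contours in general position):

# A rough sketch of the re-orientation step (illustrative only). It assumes the
# contours no longer intersect themselves or each other, i.e. the intersection
# pass has already run, and that no contour vertex lies exactly on another contour.

def signed_area(contour):
    # shoelace formula; positive means counter-clockwise
    area = 0.0
    for (x0, y0), (x1, y1) in zip(contour, contour[1:] + contour[:1]):
        area += x0 * y1 - x1 * y0
    return area / 2.0

def inside_even_odd(point, contour):
    x, y = point
    inside = False
    for (x0, y0), (x1, y1) in zip(contour, contour[1:] + contour[:1]):
        if (y0 > y) != (y1 > y):
            x_cross = x0 + (y - y0) * (x1 - x0) / (y1 - y0)
            if x_cross > x:
                inside = not inside
    return inside

def fix_orientations(contours):
    """Re-orient contours so that the nonzero rule fills exactly the area the
    even-odd rule did: outer boundaries counter-clockwise, holes clockwise."""
    fixed = []
    for i, contour in enumerate(contours):
        probe = contour[0]
        depth = sum(inside_even_odd(probe, other)
                    for j, other in enumerate(contours) if j != i)
        is_hole = depth % 2 == 1
        is_ccw = signed_area(contour) > 0
        if is_ccw == is_hole:            # wrong winding for its role: flip it
            contour = contour[::-1]
        fixed.append(contour)
    return fixed

The re-oriented contours fill the same way under either rule, which is what makes the hybrid representation described above possible.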

In theory, this is a rather simple approach: For every character element to be drawn on the page, intersect that character’s polygon with itself and all other page polygons it possibly interacts with, then store the result of that intersection in the font. There are two perhaps non-obvious problems, though:

  1. This polygon intersection needs to be fast. We process hundreds of thousands of pages every day.
  2. It needs to be very stable. We want to introduce zero display errors.

While slow processing can easily be countered by buying some fancier hardware, getting a polygon intersection to be stable is something else entirely. For example, here are some classic pathological cases that need to be handled correctly. Notice that cases like these are actually the rule rather than the exception.

Horizontal lines throw off scanline algorithms, multiple lines intersecting in a single point lead to subtle numeric errors, and intersections that land right on top of other lines can introduce hairline fragments that cause the rendering engine displaying these polygons to "bleed."

While it's possible to deal with each of these cases individually, floating point arithmetic can actually create them during or after the computation: for example, if you add a point to a polygon segment because of an intersection, the resulting small change in the segment's slope can cause it to overlap another segment it was previously disjoint from.

There are basically three ways to deal with this precision problem:

  • Introduce some random perturbation to all polygon endpoints. This nicely takes care of horizontal lines and multi-line intersections in a single point, and makes the probability of floating point errors introducing extraneous overlaps very small. However, when chaining intersections, the perturbations tend to accumulate. It's possible to counteract this with symbolic perturbation, but it's still cheating: we wouldn't be solving the problem, only making it harder to reproduce.
  • Work with infinite precision. In other words, do away with floating point errors by using enough bits in the numeric representation of the coordinates that we never have to round. This approach has been verified in practice (see e.g. Fortune & Van Wyk); a tiny illustration follows this list. The main problem is going back from the infinite precision representation to the finite precision final version (which we need to actually store the data).
  • Work on an integer grid, and use snap-rounding. Don't let floating point numbers lure us into a false sense of precision, and explicitly deal with all the special cases that can occur.
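
As an aside, here is what the second option looks like in miniature: a hypothetical helper (not our production code) that intersects two lines using Python's exact rational numbers, so no rounding ever happens. The catch, as noted above, is getting such exact coordinates back into the finite representation we actually have to store.

# A tiny illustration of the infinite-precision option using Python's exact
# rationals (hypothetical helper, not our production code).
from fractions import Fraction

def line_intersection(p0, p1, q0, q1):
    """Exact intersection of the lines through (p0, p1) and (q0, q1)."""
    x0, y0 = map(Fraction, p0)
    x1, y1 = map(Fraction, p1)
    x2, y2 = map(Fraction, q0)
    x3, y3 = map(Fraction, q1)
    denom = (x0 - x1) * (y2 - y3) - (y0 - y1) * (x2 - x3)   # zero if parallel
    t = ((x0 - x2) * (y2 - y3) - (y0 - y2) * (x2 - x3)) / denom
    return (x0 + t * (x1 - x0), y0 + t * (y1 - y0))

x, y = line_intersection((0, 0), (3, 1), (1, 1), (2, -1))
print(x, y)   # 9/7 3/7, exact; no binary float could represent either coordinate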

Let's look at the last option in more detail. Take these glyphs as an example:

A snap-rounded version would look like this:

You can see that snap-rounding doesn't just reposition all polygon endpoints; it also puts all the intersection points on the integer grid. In addition, all lines that come into the vicinity of a used grid point are routed through that grid point as well. This way, no polygon edge can ever come to lie too close to the vertex of another edge. And while snap-rounding makes things look blocky in the example above, that's only because we chose a very coarse grid for illustration. In our production system we operate with a grid spacing that is a tiny fraction of a pixel (or, for glyphs, a tiny fraction of an em square).
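
To illustrate the idea (and only the idea), here is a deliberately naive O(n²) Python sketch: it intersects all segment pairs, snaps every endpoint and intersection point to the grid, and subdivides the segments at the snapped points. Our production code does this in a single scanline pass and also routes segments that merely pass near a snapped grid point through it, which this toy version omits.

# A deliberately naive O(n^2) sketch of snap-rounding (illustrative only). The
# full algorithm also routes segments that merely pass close to a snapped
# vertex through it; this toy version only snaps endpoints and intersections.

GRID = 1.0   # grid spacing; absurdly coarse here, a tiny fraction of an em in practice

def snap(point):
    x, y = point
    return (round(x / GRID) * GRID, round(y / GRID) * GRID)

def segment_intersection(a, b):
    """Intersection point of segments a and b, or None."""
    (x0, y0), (x1, y1) = a
    (x2, y2), (x3, y3) = b
    denom = (x0 - x1) * (y2 - y3) - (y0 - y1) * (x2 - x3)
    if denom == 0:
        return None                      # parallel (collinear overlaps ignored here)
    t = ((x0 - x2) * (y2 - y3) - (y0 - y2) * (x2 - x3)) / denom
    u = ((x0 - x2) * (y0 - y1) - (y0 - y2) * (x0 - x1)) / denom
    if 0 <= t <= 1 and 0 <= u <= 1:
        return (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
    return None

def snap_round(segments):
    """Split every segment at its intersections and snap all vertices to the grid."""
    cut_points = {i: {seg[0], seg[1]} for i, seg in enumerate(segments)}
    for i in range(len(segments)):
        for j in range(i + 1, len(segments)):
            p = segment_intersection(segments[i], segments[j])
            if p is not None:
                cut_points[i].add(p)
                cut_points[j].add(p)
    result = []
    for i, (start, _end) in enumerate(segments):
        # order the cut points along the segment, then snap them to the grid
        ordered = sorted(cut_points[i],
                         key=lambda p: (p[0] - start[0]) ** 2 + (p[1] - start[1]) ** 2)
        snapped = [snap(p) for p in ordered]
        result.extend((p, q) for p, q in zip(snapped, snapped[1:]) if p != q)
    return result

print(snap_round([((0, 0), (3, 1)), ((1, 1), (2, -1))]))
# both input segments are now routed through the shared snapped vertex (1.0, 0.0)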

Intersecting two polygons while grid-rounding the results also uses only slightly more computation time than standard scanline algorithms: a Bentley & Ottmann scanline sweep needs O(n log n + k log n) time, while snap-rounding can be done in O(E log n), with E the description complexity of the crossing pattern, which for typical polygons is of order n + k. The nice thing about intersection with grid-rounding is that it's rock-solid: there's no corner case that isn't explicitly handled, and overcomplicated intersection patterns automatically simplify themselves. If you're interested in the nitty-gritty details of our implementation, we basically used the algorithm from Hobby with a few additional ideas from Hershberger and Bhattacharya et al. in order to produce a grid-rounded glyph polygon intersection in a single scanline pass.

Also, even though we process those hundred million polygons a day, only 10% of document conversion time is actually spent on that. All the rest goes into image processing. We’ll talk about that in one of our next posts.

—Matthias Kramm

Coming soon: Plan B: Font Fallbacks

The Perils of Stacking

This is the second of a four-part series on the technology behind Scribd’s HTML viewing experience. You might like to read part 1, “Facing Fonts in HTML” and part 3, “Repolygonizing Fonts,” if you haven’t already. Part 4, “Plan B: Font Fallbacks” is coming soon.

A document page, unlike an image, isn’t really just a two-dimensional thing.

It's not until you've been forced to dig into the internals of the PDF format that you come to appreciate the rich structure an innocent-looking document page gives you. Vector fills, gradient patterns and semi-transparent bitmaps fight for dominance in the z-order stack, while clip-polygons slice away whole portions of the page, only to be faded into the background by an omnipotent hierarchical transparency group afterwards. So how does one convert this multitude of multi-layer objects into an HTML page, which is basically nothing more than a background image with a bit of text on top?

To understand the problem better, here’s a stacked diagram of a page:

At the bottom of the stack we have a bitmap (drawn first), then some text, followed by vector graphics, and finally another block of text on top. We don't currently support vector graphics in HTML (stay tuned…); instead, we convert polygons to images, which presents us with the challenge of finding a z-order of bitmaps and text elements that preserves the drawing order of the original page while also simplifying the structure.

An optimal solution of transforming the above document page into a bitmap/text layering might look like this:

You can see that we merged two images into one even though they were not adjacent in the rendering stack, exploiting the fact that the text between the two images didn't overlap both of them, so one image could be moved past the text without changing the rendered result.

This was a simple case where it's actually enough to put one solitary bitmap at the background of a page. It can also happen that you have to put transparent images on top of the text (i.e., give them a higher z-index value). Notice that this requires the IE6 transparency hack.

In order to figure out whether or not an element on the page shares display space (i.e., pixels) with another element, we keep a boolean bitmap around during conversion:

An element on the page and its corresponding boolean bitmap

This bitmap tells us which regions of the page have already been drawn on by, e.g., polygon operations, and thus which pixels new text objects have to be checked against. In fact, we keep two of these bitmaps around: one tracking the area currently occupied by the next bitmap layer we're going to add to the display stack, and one tracking the same thing for text objects.

There's an interesting property of this approach: as long as the things drawn onto a bitmap so far and the HTML text fields don't overlap, we're free to choose the order in which we draw them.

In other words, it's not until we draw the first object that intersects the other layer that we have to decide which of the two layers to dump out to the page first.
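
Here is a minimal sketch of that bookkeeping (the names, the page size and the bounding-box approximation are illustrative, not our production code, which tracks the actual rasterized pixels):

# A minimal sketch of the occupancy bookkeeping (names, page size and the
# bounding-box approximation are illustrative; the real bitmaps track the
# actual rasterized pixels).

class OccupancyBitmap:
    """One boolean per page pixel: has anything been drawn there yet?"""

    def __init__(self, width, height):
        self.width, self.height = width, height
        self.bits = bytearray(width * height)   # all zeros, i.e. nothing drawn

    def mark(self, x0, y0, x1, y1):
        """Mark a rectangle of the page as occupied by this layer."""
        for y in range(max(0, y0), min(self.height, y1)):
            row = y * self.width
            for x in range(max(0, x0), min(self.width, x1)):
                self.bits[row + x] = 1

    def overlaps(self, x0, y0, x1, y1):
        """Would a new element in this rectangle collide with this layer?"""
        for y in range(max(0, y0), min(self.height, y1)):
            row = y * self.width
            for x in range(max(0, x0), min(self.width, x1)):
                if self.bits[row + x]:
                    return True
        return False

# one bitmap for the pending image layer, one for the pending text layer
pending_images = OccupancyBitmap(612, 792)
pending_text = OccupancyBitmap(612, 792)

pending_images.mark(50, 100, 300, 400)              # a figure was rasterized here
print(pending_text.overlaps(50, 100, 300, 400))     # False: layer order still undecided
print(pending_images.overlaps(60, 380, 200, 410))   # True: a text run here forces us
                                                    # to commit to a layer order now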

Here’s an example of a document page being rendered step by step:

The background image is put on the lowermost layer. Notice that the background also contains graphical elements for the equations on this page:

This is the text layer, consisting of normal HTML markup using fonts extracted from the document:

The two layers are combined to produce the final output:

Take a look at the actual technical paper as converted. And of course, if you want to see your own docs htmlified, just upload them to Scribd.com!

—Matthias Kramm

Next: Repolygonizing Fonts

Facing Fonts in HTML

This is the first of a four-part series on the technology behind Scribd’s HTML viewing experience. You might like to read part 2, “The Perils of Stacking” and part 3, “Repolygonizing Fonts,” if you haven’t already. Part 4, “Plan B: Font Fallbacks” is coming soon.

PDF to HTML converters have been around for ages. They take a document, analyze its structure, and produce (with the help of some images) an HTML document that has roughly the same text at roughly the same position. This is fun to play with, but not really useful for practical applications since fonts, layout and style are often very different from the original.

With the new Scribd HTML document conversion, we aimed for a higher goal: Let’s create a version of a PDF file that looks exactly the same, but is also completely valid HTML. It turns out that since all major browsers now support @font-face, this is actually possible.

Encoding documents in this way has numerous advantages: no proprietary plugins like Flash or Silverlight are required to view documents; we take full advantage of built-in browser functionality like zooming, searching, text selection, etc.; state-of-the-art embedded devices are supported out of the box; and even on older browsers it degrades gracefully (to HTML text with built-in fonts).

So let’s back up a bit and review what @font-face is:

@font-face is a way of making a browser use a custom TrueType font for text on a web page. For example, this text uses the Scrivano typeface:

font-face

Try selecting this text and right-clicking to assure yourself that this is indeed just a standard browser text element.

This is accomplished by introducing a @font-face CSS declaration in the page's style block:

@font-face {font-family: Scrivano;
            src: url("http://someserver.com/scrivano.ttf");}

Now you can use the "Scrivano" font family as if it were a built-in font. Before we dive into how one can use this for converting documents to HTML, there's a caveat: for @font-face to really work everywhere, we need to store three font files: one for Internet Explorer (.eot), one for embedded devices (.svg) and one for Firefox, Safari, Chrome et al. (.ttf). Once this is done, however, @font-face works in pretty much every modern browser, including the ones that come with mobile phones (iPhone and Android). So this is how the above looks, fully fledged:

@font-face {
  font-family: 'Scrivano';
  src: url('scrivano.eot');                         /* Internet Explorer */
  src: local('☺'),                                  /* smiley hack: see the link below */
       url('scrivano.ttf') format('truetype'),      /* Firefox, Safari, Chrome et al. */
       url('scrivano.svg#Scrivano') format('svg');  /* embedded devices */
}

(For a detailed discussion of the smiley (☺) hack in there, see Paul Irish's bulletproof @font-face syntax.)

Now, let's talk about documents again. As in most documents, text is the predominant element, which makes @font-face invaluable for transcoding PDF files into @font-face-enabled HTML. Of course, the fonts we need to display HTML text as closely to the original as possible have to be constructed from scratch: PDF documents store fonts in a myriad of formats (Type1, Type3, OpenType, etc.), none of which is directly usable as a web font. Additionally, the same font may be used in different places with different encodings or even different transforms.

I don't want to bore anyone with the specifics of decoding an arbitrary PDF font and building a TTF, EOT and SVG out of it. The transformed font variant, however, is actually fun to talk about. Diagonal text in documents is surprisingly common; let's look at, for example, this figure with diagonal axis descriptors, from a document about fractals:

How do you encode the diagonal text in this document in an HTML page?

Short of using element transformations (-moz-transform, DXImageTransform, etc.), which we found to be rather impractical, we encode the above HTML with a custom font created by transforming the original font. Here's how our generated font looks in FontForge:

From the font screenshot above you'll also notice that we reduce fonts to only the characters that are actually used in the document; that saves space and network bandwidth. Usually, fonts in PDFs are already subsetted, so this is not always necessary.
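
As for how the transformed variant is produced: conceptually, every outline point of every used glyph gets run through the text's transformation matrix. A minimal sketch (the glyph representation is hypothetical, not our production code):

# A minimal sketch of deriving a "diagonal" font variant: run every outline
# point of every used glyph through the text's transformation matrix. The glyph
# representation (plain point lists in em-square units) is hypothetical; the
# transformed outlines would then be written out as TTF/EOT/SVG like any other
# generated font.
import math

def rotate_glyph(contours, angle_degrees):
    """Rotate a glyph outline around the origin of its em square."""
    a = math.radians(angle_degrees)
    cos_a, sin_a = math.cos(a), math.sin(a)
    return [[(x * cos_a - y * sin_a, x * sin_a + y * cos_a) for x, y in contour]
            for contour in contours]

# a crude triangular "glyph", rotated 30 degrees for a caption that runs
# diagonally up the page
glyph = [[(0, 0), (600, 0), (300, 700)]]
print(rotate_glyph(glyph, 30))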

Naturally, for fonts with diagonal characters, every character also needs to be offset to a different vertical position (we encode fonts as running left-to-right). In fact, this is basically how other HTML converters work: they place every single character on the page in its own div with position:absolute:

<!-- crude pdf to html conversion -->
<div style="position:absolute;top:237px;left:250px;">H</div>
<div style="position:absolute;top:237px;left:264px;">e</div>
<div style="position:absolute;top:237px;left:271px;">l</div>
<!-- etc. -->

At Scribd, we invested a lot of time in optimizing this, to the degree that we can now convert almost all documents to "nice" HTML markup. We detect character spacing, line heights, paragraphs, justification and a lot of other attributes of the input document that can be expressed natively in HTML. So a PDF document uploaded to Scribd may, in its HTML version, look like this (style attributes omitted for legibility):

Original document:

HTML version:

<p>
  <span>domain block is in a different image than the range block), as</span>
  <span>opposed to mappings which stay in the image (domain block</span>
  <span>and range block are in the same image) - also see Fig. 5.</span>
  <span>It's also possible to, instead of counting the number of</span>
</p>

Together with <img> tags for the graphic elements on pages, we can now represent every PDF document in HTML while preserving fonts, layout and style, keeping text selectable and searchable, and making full use of the optimized rendering engines built into browsers.

Want to see it in action? Check out this technical paper, browse our featured documents or upload your own!

-Matthias Kramm

Next: The Perils of Stacking

The Future of Reading is Open

Today, Scribd is changing the way you read documents online. Over the next few weeks and months, Scribd will convert our entire content corpus — tens of millions of documents, books and presentations — into native HTML5 web pages so that we can offer the best online reading experience. Scribd documents in HTML5 load instantly, support native browser functions (zoom, search, scroll, select text), and deliver an impressive reading experience across all browsers and web-enabled devices, without requiring add-ons or plug-ins.

What does this mean? It means that any document in any format (MS Office Docs, Google Docs, PDF, ePub) will now be readable on any device with a browser. This move — nearly six months in the making — represents billions of pages of content that will become part of the fabric of the web. To find out more, read this presentation:

http://www.scribd.com/html5

The Scribd Open Reading Platform: The future of reading is open. Join us.
-Jared Friedman, Scribd CTO and co-founder