The next ten years


I measure the growth of typeface design by the questions of border control agents.

A decade ago, the phrase ‘I am a typographer’ would trigger a subtle move of the hand towards the ‘dodgy traveller’ button (just in case, you understand). The agent would relax once I confirmed that I was indeed in the mapping business. But in the last few years – three or four, no more – things are different. Sometimes I even drop the words ‘typeface design’ without expecting to meet the agent’s supervisor. And, in a growing number of cases, agents will tell me the name of their favourite font, and that they got a book called Just my type for Christmas.

Typefaces becoming part of the mainstream is neither accidental, nor a fashionable blip. It was foreseeable many years ago, and has been accelerating under the dual impetus of the move to a standards-compliant, text-orientated internet and the growth of mobile devices with usable browsers.

Designers who remember the last decade of the twentieth century will recall the shift from intensely localised markets, with only superficial communication, towards connected regions. The European integration project, from 1992 onwards, followed by the surfacing of the internet onto the mainstream three years later, required fonts that could support a growing number of languages (albeit primarily those written left-to-right, with unconnected letterforms). Fast-forward a decade, and the typefaces on pretty much any up-to-date computing device could render most scripts in the world, even if the more complex writing systems still suffer in fidelity and design range. The two technologies responsible for uniting the world typographically, Unicode and OpenType, are now in a stage of maturity and refinement, covering most of the needs of most readers.

The core typefaces shipping with an operating system, or a smartphone, or Adobe’s applications, are a good litmus test. Most have well over 2,000 glyphs in each font, with many additional glyphs for stylistic sets like small caps and non-lining numerals, across the Cyrillic, Greek, and extended Latin scripts. Other typefaces cover Arabic, Armenian, Ethiopic, Hebrew, a whole range of scripts for India, and a growing number of scripts for East Asia: from CJK (Chinese, Japanese, and Korean) to Thai, Khmer, and Burmese. All these resources establish a base level for servicing most texts. It is now very likely that there is some typeface that will render almost any language, and possibly more than one, in different styles. But there are compromises: even if there’s more than one typeface, styles may not match across scripts, and the range of type families is generally uncoordinated. The profusion of styles, widths, and weights of the Latin script is only partly met in other European ones, and far less so in global scripts.

This state ensures basic communication, but is not very helpful for graphic designers and typographers working with global brands, multi-script documents, or with complex applications exclusively in non-Latin scripts. Communications professionals (in disciplines including, and beyond, the obvious candidates of education and publishing) need a wide range of typeface styles to express the complexity of a publication or a brand, and they need the right style in different weights, and widths, and so on. And this is why typeface design is growing, with no sign of abating: a triple combination of growing global brands, a migration to screens of documents with long print traditions (from ebooks and interactive school textbooks on tablets, to local news services replacing traditional newspapers), and a growth of personalised, transactional documents like online shopping catalogues, increasingly on mobile browsers. At the same time, niche print publications are growing: they take up the slack of offset press capacity, but they also thrive in print runs of a few hundred, a traditional no-man’s land that digital presses have opened up. These conditions, of transformed documents and new platforms, push the demand for ever more typefaces that are typographically rich, wide in script coverage, and tailored for use in a wider range of environments: not just different surfaces (screens, print-on-demand, and traditional presses) but also different canvases: spreads, pages, and columns of hugely variant sizes, each with its own demands on line density, contrast, and spacing.

Two factors add substantially to this need. Firstly, the explosion of mobile networks in regions where cable-based broadband is scarce means that critical communications are restricted to smaller screens that render primarily text. Secondly, the speedy adoption of tablets: these are agnostic devices that do not convey any functional aspects of the documents they render. (In other words, the devices do not explain the interaction, like a print document does. The navigation arises from the document’s typographic design, not its material qualities.) The four main tools of typographic design become the main carriers of any identity everywhere: typefaces, spacing, visual hierarchies, and colour are the only reliable identifiers.

This process has precipitated a radical re-thinking of a typeface designer’s skillset, especially with respect to scripts the designer is unfamiliar with, and most probably cannot read fluently. In such cases, designers need to engage with the characteristics of the script, bringing to the table an understanding of how letterforms are influenced by changes in type-making and typesetting technologies. But just looking at a bunch of local documents is not enough. Designers need to bring an appreciation of the typographic conventions for the genre of documents in each culture. In response to these demands, the best typeface designers integrate research in historical and contemporary artefacts: books and ephemera, type-making and typesetting equipment, but also texts and material such as drawings produced during the type-making process. These combine with a study of texts written by type makers about type-making, designers about their practice, and a range of research texts on the development of typeface design. The key for all these to be included in a commercial schedule is a framework for integrating research into design practice that enriches the designer’s understanding, and unlocks informed creativity.

The weight of methodology and research places multi-script typeface design at odds with art school traditions of design education. There is, quite simply, too much to learn in areas touching on history, linguistics, and technology for self-taught professionals, or the informal osmosis of apprenticeship-based courses. And, rather than be seen as an oddity in the design world, typeface design is leading a gradual shift in the wider design education sector. Notions of clarifying a body of field-specific knowledge, and formulating a methodology for practice that is transferable across schools and regions are taking off, globally. (Increasingly, I am invited to speak on exactly that subject: how to develop a research-informed, culturally sensitive methodology for teaching that educates potentially excellent professionals. And promotion criteria for design educators worldwide are changing to include research-based outputs, moving design closer to the Humanities than the Arts.)

The growth in books and print magazines dedicated to typography, as well as special sections in general interest titles, is just one of the signs of typography maturing. The many conferences, workshops, and exhibitions are another – and they are aimed not only at typographers, but at web designers, brand designers, and graphic designers alike. But there is another, more subtle indicator that typography and typeface design are gradually emerging onto the wider consciousness.

As typeface families grow to cover multiple scripts, concepts of national and regional typographic identity become current, and often volatile. New typefaces can reflect both home-grown and imported visual trends; they give concrete form to the expression of community identities, and become inflection points in visual culture at a range of levels. Beyond functional requirements, they can embody political and generational shifts, and encapsulate a society’s dialogue with modernity. And it is exactly on this front that typeface design will be most visible, and relevant: in enabling this dialogue between different approaches to text-based communication, and making visible the tension between different traditions and ways of thinking.

Next time I cross a border, I’ll have a longer tale to tell.

Thoughts on a quiet master

Fifteen years after the centenary Monotype Recorder (published in time for the ATypI 1997 conference in Reading) the title was revived, this time to celebrate the four-decade-long career of Robin Nicholas. There were copies available at the shop of the Pencil to Pixel exhibition, but they have not yet circulated worldwide. This text was my contribution.

 

Forty years is a long time, by any measure. In typography and typeface design, the last forty years are aeons, encompassing tectonic shifts in the industry. From stable, verdant summers to tumultuous storms and heavy winters, to hesitant then blooming springs, the type industry has seen more change and upheaval in the last few decades than in its previous centuries combined. We know of large companies that ceased to exist, others that transformed into very different entities, and of new ones – small and sometimes getting larger – that are growing into a new environment. And yet, the core principles of typeface design somehow persist, mutate, adapt, and survive the changes.

The company that Robin Nicholas joined is probably the most interesting, and certainly the most persistent survivor. At the time of his joining Monotype as a young draftsman, the industrial behemoth smelting heavy but precise machinery for dispatch to a global market must have seemed like a mountain of granite, immovable through the scale of its commerce and the confidence of its technology. Indeed, expanding his letter-making skills from two to three dimensions in his first years pointed to techniques handed down to apprentices for centuries before his time. And yet, before long, accelerating changes started introducing new ideas in typemaking. The swishes and squirts of pumps gave way to the clicks of flashing lights, and soon after just the hums of cooling fans and the low buzz of electronic devices. The process of making letters lost a dimension: from drawings to ink, the minuscule ridges and valleys of carefully cast metal gave their place to letters cut out of light (more changes were yet to come). New collaborators brought very different skillsets into typemaking, to be replaced in their turn by a dispersed, localised, and by comparison chaotic community. This story is often told, and well known to all typeface designers and typographers, and we do not need to dwell on the details. Yet, it is only marginally a story of typeface design: it is one of typesetting, of documents in front of readers’ eyes. These documents responded to the seismic industrial upheaval by filtering typography through the new technologies: a great part of their conventions survived, at the level of the letter, the word, and the paragraph. Many more changed, most visibly in layout, and document structure. Primarily, and from our point of view, the technological shifts have been a story of ideas about typefaces surviving across technologies, like a vessel floating from a river to an estuary to an open sea.

Robin’s career has almost entirely coincided with these fundamental transitions in the way typeface designers capture their intentions, and then encode them for production. Precise paper drawings for the letter shapes, specific to each range of sizes, survived across technologies longer than people would expect. The delicate pencil outlines captured details in ways that it would take decades for screens to match, even if they gradually lost a lot of the formality of their earlier years (for a brief period, beautifully cut rubylith transparencies were the apogee of flat encoding: in many ways, the most pristine form in which letters have ever been stored). Downstream of the sheets of paper, however, the story is very different. Once drawings could be stored on a computer, it is not just the direct relationship to the rendered sizes that is missing (away from a punchcutter’s desk, this was never really present). Storing letter shapes as digital constructs abstracts them from any rendered reality, and makes the drawn letters only a beginning in a process of shape manipulation on a scale that twisting lenses or rheostats could not begin to hint at.

These transitions posed unique challenges for a designer whose career spanned such changes. Superficially, the designer would participate in changing the ways of doing things: new units of measurement, new equipment down the line, new production processes. More visibly, a new class of professionals joined the companies, with a language for describing the making and rendering of letters that would seem alien only a couple of decades in the past. But fundamentally, the changes in typesetting technologies forced a reflection on the key skills of a typeface designer. At the beginning of Robin’s career it would be easy to assume that the typeface designer was inseparable from the making of pencil renderings on paper. The only distinction one could make would be between the degrees of seniority (a draftsman, a designer, the Head of the Drawing Office), to which different levels of privilege for making changes would be assigned. But from when the locus of the designer’s decisions became fluid and transferable across type-making technologies, the contribution of the designer needed to be more carefully articulated. A – not at all rhetorical – ‘what is it that I am really adding to this process?’ has been central to deliberations on typeface design from the mid-sixties onwards (neatly overlapping with Robin’s career). The loss of directness makes this a critical reflection: letters are no longer perceptible in any approximation of their true form as they travel through each process, but only witnessed indirectly through human-friendly compromises – as every digital technology demands.

Faced with this question, the typeface designer will invariably return to the fundamentals that survive the technological shifts: the designer’s craft involves decisions about typographic patterns and shape details at a level abstracted from the encoding of the shapes, and the mechanisms of rendering. In other words, the designer imagines an idealised visual and intellectual experience by the reader that is derived from a particular typeface, and will strive to make this a reality through – and around – the technology at hand. Robin’s work offers some particularly good examples of this three-way dialogue between the designer, the ideal model of a typeface, and the technology used to capture it, none more so than the revivals of historical types. Is a true Bell, a Centaur, a Fournier, a Janson, a Van Dijck, a Walbaum the one closest to the imprint of metal types in the sources – and, of those types, which? In these we see a masterly distillation of a whole range of possible appearances into a definitive set of shapes that have become typographic reference points. Such digital classics defined not only mainstream textures for typographers and readers, but also an indirect basis for the digital explorations of the 1990s, which found a voice by negating the refreshed historical models. And the approach to the chameleon that is Dante, and the revisitation of Bembo in Bembo Book, show an exceptionally delicate adaptation of style to the digital medium. (Although we must admit that not even Robin’s skills can help the historical accident that is Pastonchi…)

Next to revivals, the other big challenge for typeface designers is a very tight brief for a text-intensive design, and no brief is tighter than that for newspaper typefaces. These must meet extremely high functional parameters, in tight spaces and with requirements of economy and stylistic discretion that make the achievement of a distinguishing identity the typographic equivalent of hitting a bullseye blindfolded. Yet Nimrod has endured: its combination of gently swelling terminals and deep arches on the x-height with a light but strong baseline set a much-imitated pattern, directly in new designs even thirty years later, but also in echoes that, applied to Scotch Romans, updated a style that remains one of the dominant styles for text typography to this day. The same pedigree of absolute control of a dense texture (and a familiar clarity in the horizontal alignments) can be seen in the more recent Ysobel, which updates the style with a more self-indulgent italic. Ysobel’s italic is not only a response to rendering improvements in news presses since Nimrod, but also an endorsement of the contemporary rediscovery of the potential of italics in text typefaces, and the gradual abandonment of historical models for the secondary styles.

Whereas revivals and text-intensive typefaces are most illuminating of the designer’s skill, Robin’s work with typefaces for branding and OEMs testifies to a side of his work that is not possible to list in a type specimen. For those of us who have had the pleasure of working with him, Robin exemplifies the quintessential collaborator: he combines mastery with humility, and confidence with a sincere willingness to discuss a design, and share his expertise. At the heart of his approach lies a deep respect for his fellow designers, and a constant striving to learn and, in turn, to contribute to the discipline. (I remember fondly sitting with Robin over a stack of printouts with an OEM Greek typeface, our discussion taking us from the shapes in front of us to a pile of books and specimens that would help us answer why a particular treatment of a bowl is true to the style and appropriate to the brief, rather than just formally pleasing.)

This combination of openness and respect for the discipline of typeface design points to two key aspects of Robin’s work, not directly related to any shapes he made. First, his nurturing of several designers who worked under his supervision at Monotype. And secondly, his dedicated efforts to support education in typeface design, not least through his involvement with the masters programme at Reading. As an External Examiner, Robin has directly influenced the direction of education in typeface design; as an advocate for the concrete support of promising applicants he has helped change the lives of a small number of people in very real terms.

I am leaving for last an area of Robin’s contribution that perhaps few people outside the company know much about, but has been paramount in supporting an extremely important trend, as well as in foregrounding the unique nature of Monotype. Through his engaged stewardship of the Monotype Archive in Salfords, Robin has enabled numerous researchers in their work in Latin and non-Latin scripts. This has had a critically beneficial effect on the typefaces designed, and – even more importantly – on the documentation that is available to the next generation of researchers and designers. It is no overstatement to say that Robin’s support for the integration of archival research into our postgraduate projects is benefiting in fundamental ways the skills of the younger generation of typeface designers from Reading, and, through them, the appreciation of a research-informed approach in the wider typeface design community. (We should note that Robin is far from alone within Monotype in his active support of education and research: the company is highly sensitive to the unique legacy it holds, the real value for contemporary design of the knowledge embedded in its archives, and the benefits of supporting students at a range of levels.) It would be remiss of me to omit Robin’s involvement with the first Knowledge Transfer project between Monotype and the University of Reading. The project, which demonstrated in concrete terms the value of the Archive in Salfords for the development of new typefaces for the Indian market, captured a key moment in the globalisation of typeface design and the shift towards screen-based texts, and, specifically, mobile devices. The project also enabled a marketing narrative of differentiation based on concrete and deep expertise spanning decades; arguably Monotype is the only active company able to make that claim with regard to the support of non-Latin scripts.

I hope that I have done justice, in the limited space here, to Robin’s long and diverse career. I have attempted to paint a picture of a consummate professional, adaptable to the conditions of his industry, reflective about his practice and the fundamentals of his discipline; an enlightened collaborator, keen to share expertise and support the growth of a younger generation of professionals; and – crucially – a Type Director with a clear vision about protecting and promoting the unique legacy of a very special company, actively engaging with research and education in ways that influence the future of the discipline. For all these, typeface design owes Robin Nicholas a debt of gratitude.

Monotype Recorder

In search of the digital Cresci (1999)

The text below is a pre-publication version of an article that appeared in Information Design Journal, in 2000. Although somewhat dated, it is useful for typeface design students. It picks up references from a cluster of texts from four issues of Visible Language that form part of the sources for one of the seminars in the MATD and MA (Res) TD:

Donald E Knuth, ‘The concept of a Meta-Font.’ In vol XVI, no 1, 1982

Douglas R Hofstadter, ‘Metafont, metamathematics, and metaphysics.’ And:
[various] ‘Other replies to Donald E Knuth’s article, “The concept of a Meta-Font”.’ In vol XVI, no 4, 1982

Geoffrey Sampson, ‘Is Roman type an open-ended system? A response to Douglas Hofstadter.’ And:
Douglas R Hofstadter, ‘A reply from Douglas Hofstadter.’ In vol XVII, no 4, 1983

Donald E Knuth, ‘Lessons learned from Metafont.’ In vol XIX, no 1, 1985

 

**********************

No review of digital typography is complete without a long stopover in Don Knuth’s neighbourhood. A mathematician based in Stanford, California, Knuth is primarily active in the field of computer programming. During the mid-1970s, out of frustration with the quality of phototypeset mathematics, Knuth was driven to address the problem of at least matching the standard of hot-metal Monotype output. [1. Knuth’s first two volumes of The art of computer programming (Reading, MA: Addison-Wesley, 1968 and 1969 respectively) were typeset on Monotype machines. A good source on the standard achievable by hot-metal composition is the winter 1956 issue of the Monotype Recorder (vol. 40, no. 4), which was devoted to the setting of mathematics with the Monotype 4-line system.] The product of this endeavour was Tex, a versatile and powerful typesetting system which outputs device-independent documents. Alongside Tex, Knuth developed Metafont, [2. Knuth typesets Tex and Metafont as TEX and METAFONT respectively throughout the book. Here the words are treated as software titles, not logotypes.] a system for generating typeforms. (The term “typeform” is used to signify the rendering of a typographic character, therefore a mark intended to function in conjunction with other marks with which it forms a collection (normally a typeface), without prior knowledge of the context of its use. On the other hand, a letterform is a one-off mark produced for a particular context, e.g. a manuscript or a piece of calligraphy.) From early on Knuth made both systems freely available, and it is not an overstatement to say that Tex has transformed the production of scientific writing. Tex users number in the tens (if not the hundreds) of thousands, and it will be a rare math-intensive paper that is not so typeset.

Digital typography, published in 1999, is the third in a planned series of eight books of Knuth’s published works, together with some new material. It is a hefty 680 pages, comprising 34 papers and articles, including the transcripts from three relatively recent question & answer sessions. The majority of the contents have been previously published in TUGboat, the journal of the Tex Users Group. Knuth has already begun the process of identifying errata; characteristically, readers who contribute can look forward to a reward of $2.56. (The list is updated at Knuth’s Stanford page.) To his credit, not only is the prose very readable, but the mathematical exposition managed to flatter this reader into believing that half-forgotten business-studies algebra was up to the task of digesting the arguments. However, for reviews on the programming content, and opinions on literate programming (an area of Knuth’s activity to which he attaches great importance), readers are advised to turn to TUGboat vol 20, No 2, 1999: its editor has announced reviews by Knuth’s fellow computer scientists.

At one level, the book is an archival collation of technical papers and notes; at another, it is a source of pertinent ideas and fascinating suggestions – especially so when addressing the nature of letters and typeforms. Inevitably, the more technical chapters will make non-specialists feel they are eavesdropping on a conversation having missed the key remark. Even so, reading the early projections back-to-back with the mature re-evaluations (mostly through the question & answer transcripts) sheds a revealing light on the development of a significant body of work.

The papers fall into two broad categories, Tex and Metafont, with a few further items on rasterization. Of the two main themes, Tex is the more significant, if less interesting – a corollary of its undisputed status: the value of Knuth’s contribution to electronic typesetting is as significant as the superiority of Tex’s line-breaking algorithms over pretty much anything else available now, let alone twenty years ago. Indeed, it is only this year, with the launch of Adobe’s InDesign, that we get a ‘multi-line composer’ – a line-breaking algorithm that monitors several lines before fixing line-breaks. Adobe properly acknowledge the credit, [3. See ‘Adobe InDesign in depth: text and typography’ pp. 3–4, 8.6.99. Adobe’s description of InDesign’s ‘multi-line composer’ is impressively close to Tex, and they even use the term ‘badness’ (a Tex trademark) in their documentation.] but, at the time of writing, it remains to be seen whether InDesign can match the typographic texture Tex can achieve.

Tex is based on the twin concepts of approaching documents as lists of boxes joined by stretchable ‘glue’, and of defining values of ‘badness’ for deviations from optimal spacing values. Knuth repeatedly mentions that the potential of these simple premises was not fully foreseen. Indeed, a non-Texpert looking at the typesetting complexity of material produced with Tex cannot but admire the elegance and economy of the concept. In this respect Digital Typography is a showcase item – and if the typesetting of the mathematical matter is more obviously impressive, the evenness of the texture in less extravagant pages sets a subtle benchmark. (The only gripe in this department being a propensity for too-wide, almost double, spaces after full stops – an annoyingly persistent legacy of typists, which is avoidable in Tex.) Indeed, the combination of quality printing on good paper and the effectiveness of Tex is enough to render the singularly unattractive Computer Modern typeface used for most of the book digestible, by any standards no mean feat.
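To make the box-and-glue idea concrete, here is a minimal sketch in Python (not Tex’s actual code: the function, the numbers, and the simplified badness formula, roughly one hundred times the cube of the adjustment ratio, are illustrative only):

```python
# Minimal sketch of the box-and-glue idea: a line has a natural width plus a
# budget of stretch and shrink; 'badness' grows steeply as spacing deviates
# from its natural value, so a whole-paragraph optimiser can compare breakpoints.
def badness(natural, stretch, shrink, target):
    """Approximate Tex-style badness for setting a line of `natural` width
    (with `stretch` and `shrink` available) to a `target` measure."""
    if target >= natural:                       # the line must be stretched
        if stretch <= 0:
            return float("inf")
        r = (target - natural) / stretch        # adjustment ratio
    else:                                       # the line must be shrunk
        if shrink <= 0 or natural - target > shrink:
            return float("inf")                 # overfull: cannot shrink that far
        r = (natural - target) / shrink
    return min(10000, round(100 * r ** 3))      # steep, cubic penalty, capped

# Two candidate breakpoints for the same measure: the second needs far less
# stretching, so a multi-line composer would prefer it over a greedy choice.
print(badness(natural=280, stretch=12, shrink=4, target=300))   # badly stretched
print(badness(natural=294, stretch=12, shrink=4, target=300))   # nearly natural
```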

By far the more interesting parts of the book are the chapters on the inception, development, and use of Metafont. Particularly enjoyable for documenting the evolution of a design is the chapter on AMS Euler, a typeface family for mathematical typesetting that Knuth developed in association with Hermann Zapf for the American Mathematical Society. [4. One cannot help but think that Zapf, probably the best known representative of the calligrapher-as-type-designer approach, was the ideal choice for a meta-collaborator: Zapf’s technique lends itself readily to interpretation in the Meta-font idiom. It is tempting to speculate on developments had Knuth collaborated so closely with a type designer from a punchcutting or drafting background. It is, however, interesting to compare other documented uses of Metafont for large projects (see, for example: Southall, Richard, ‘Metafont in the Rockies: the Colorado typemaking project.’ In Roger D. Hersch et al (eds.), Electronic publishing, artistic imaging and digital typography. Berlin: Springer Verlag, 1998, pp. 167–180, where the emphasis was on control of the rasterized output).] Predictably, work on Metafont started as an attempt to address the practical problem of supplying typefaces for Tex – remember, this is long before the days of PostScript and TrueType, and digital typefaces suitable for mathematical typesetting were thin on the ground. Knuth’s original goal was ‘to find a purely mathematical way to define the letter shapes and to construct the corresponding raster patterns’. [5. Digital typography, p. 35] This statement can be something of a Pandora’s box, depending on whether one interprets ‘to define letter shapes’ to mean: ‘to codify an explicit description of a typeform (or a group of typeforms)’ – or: ‘to specify a process for the generation of new typeforms’. Throughout the book, one gets the impression that Metafont does the former, and its creator thinks (sometimes, at least) that it can do the latter, as if Knuth saw in Metafont more than the technology implied. In Digital Typography he recounts how he studied letterforms and saw regularities in the design, from which he realised that he ‘shouldn’t just try to copy the letterforms, but [he] should somehow try to capture the intelligence, the logic behind those letterforms’. [6. Digital typography, p. 607] One cannot but think that, at some point, Knuth must have made a mental leap from devising a description system for typeface families to a broader generic system for typeform description and generation. Perhaps it was his enthusiasm for letterforms that led him to such statements. In any case, this quote raises two fundamental questions: given that there is some ‘intelligence’ behind typeforms, is it possible to make it explicit? And, secondly, assuming that it is so, is it possible to codify this ‘intelligence’ in mathematical terms?

In any case, Knuth seems to have been aiming at a new approach for designing a typeface family, an approach that could ensure consistency in the design of various, not necessarily predetermined, weights and styles. (A goal that Adobe’s Multiple Master fonts have also sought to address – bar the ‘not necessarily predetermined’ bit.) The first version of the system, Metafont 79, defined ‘pens’ and ‘erasers’, and prescribed the trajectories between co-ordinates in two dimensions that these pens (or erasers) would have to execute in order to render each typeform. The dimensions and behaviours of pens and points were parameters definable by the user. A particular Metafont would be a collection of a finite set of parametric mark-makers and behaviours, each parameter assuming one of a potential infinity of values. In other words, using a single ‘master description’, infinite variations on a theme could be output. Key to this point is the fact that there exists not a singular, explicit collection of typeforms from which variations are extrapolated; rather, the specified parameters define a ‘design space’ within which any instance is as valid as the next. In essence Metafont-the-application is a system for the description of categories of typeforms; each Metafont-family is a classification system with a fixed number of ‘pigeonholes’ of infinite depth; each Metafont-typeface the compilation of a selection from each pigeonhole.
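The ‘meta’ idea of a single parametric master yielding unlimited instances can be sketched in a few lines of toy code (Python rather than Metafont’s own language; the parameters and names below are illustrative, not Knuth’s):

```python
# A toy 'master description': one parametric definition, many concrete instances.
from dataclasses import dataclass

@dataclass
class Params:               # the 'pigeonholes' of a meta-family (names are made up)
    pen_width: float        # thickness of the tracing pen
    x_height: float         # height of the lowercase
    slant: float            # horizontal shear applied to every point

def meta_stem(p: Params):
    """A master description of a plain vertical stem: a pen of p.pen_width traces
    a path from the baseline to the x-height, sheared by p.slant. Returns the pen
    trajectory as (x, y) points; a real system would go on to rasterise it."""
    path = [(0.0, 0.0), (0.0, p.x_height)]
    return [(x + p.slant * y, y) for x, y in path], p.pen_width

# Two instances drawn from the same design space: a regular and a bold oblique.
print(meta_stem(Params(pen_width=0.08, x_height=0.5, slant=0.0)))
print(meta_stem(Params(pen_width=0.14, x_height=0.5, slant=0.2)))
```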

Knuth’s scientism is presented in the book as the most recent chapter of a tradition that started with Felice Feliciano in the 1460s, continued with Damiano da Moyle, Fra Luca de Pacioli, and nearly twenty others, to reach a spectacular highpoint around 1700 in the engravings supervised by Philippe Grandjean. Almost without exception, these attempts at instructing on the ‘correct’ or ‘proper’ formation of letterforms (mainly capitals) were no more than fascinating red herrings of rationalisation. The most important exception to this trend was the Milanese scribe Giovanni Francesco Cresci, who pointed out the futility of his contemporaries’ propositions – and is in fact quoted in Digital Typography. But Knuth then does an about-face and writes: ‘Well, Cresci was right. But fortunately there have been a few advances in mathematics during the last 400 years, and we now have some other tricks up our sleeves beyond straight lines and circles. In fact, it is now possible to prescribe formulas that match the nuances of the best type designers’. [7. Digital typography, pp. 38–39] This remark can be interpreted as either ‘we can codify an existing design without any information loss’ (which is perfectly acceptable), or ‘it is possible to specify algorithms for the creation of letterforms’ – we should add the caveat: to Cresci’s standard. Neither of these interpretations is a correct description of Metafont, but the latter is closer to Knuth’s writing about it.

Metafont is a tool for creating typeforms in the same way that a chisel is for creating letterforms. A meta-designer will approach the computer with more or less pre-formed intentions about the general style of typeforms, if not a wholly clear notion of a specific instance of the Metafont he will be defining. He would then have to mentally translate his intentions into the Metafont grammar of pens, erasers, trajectories, and edges, and enter the appropriate code. And, as with all similar activities, we can expect the designer to undertake several proof-and-revision cycles until the result is deemed satisfactory. The meta-designer uses a computer to facilitate the expression in a usable format of a pre-conceived set of typeforms, in the same way as someone using Fontographer or FontLab: the concept of a typeform is visualised internally, then translated into a formal grammar understood by a tool, then entered in the tool’s memory for processing. For sure, as with all tools, Metafont will in some way affect this ‘double translation’ of the designer’s intentions. But to claim that Metafont aims at ‘the explicit implementation of the design ideas in a computer system’ [8. Bigelow, Charles, [contribution to] ‘Other replies to Donald E. Knuth’s article, “The concept of a Meta-Font”.’ In Visible Language, vol. XVI, no. 4, 1982, p. 342] misses the point that Metafont simply implements the product of the design ideas in a specific medium. What results from meta-designing is nothing more than the final trace of the process, not in any way a representation of the design process itself – let alone the ideas that generated it. Ultimately Metafont rests on two flawed assumptions: one, that by studying the finished product of designers’ work we could understand what was going through their minds, and isolate these intentions from the effects of their tools; and, two, that we could then express the range of these intentions in code for a computer ‘to carry out the same ideas. Instead of merely copying the form of the letters, […] to copy the intelligence underlying the form’. [9. Digital typography, p. 8]

What is important in type design? Type designers would say: patterns, relationships, the interplay of negative space and positive mass. A designer may intellectualise the design process ex post facto, but it is highly questionable that this process can be made explicit before the design is complete – indeed, it is safe to assume that it is largely crystallised during designing. To seek to represent the internal process of designing as the making of marks is to mistake a procedure for its motive.

Given that a considerable part of Knuth’s research was based on manuscript books and fine printing (that would, most probably, make use of oldstyle typefaces), it was perhaps not unexpected for him to adopt a model that replicated the rendering of calligraphic letterforms. However, the fluidity and potential for optical balance in formal calligraphy does not survive in Metafont. It is indicative that, in the process of refining the mathematical description of an ‘S’, Knuth chose Francesco Torniello’s geometric description from 1517. Torniello’s essay was one more entry to the list of the Renaissance geometric fallacies that started with Feliciano – one could add that Torniello’s were among the less beautiful letterforms. Knuth follows and amplifies Torniello’s instructions, then ‘solves’ the problem of a mathematically correct representation of the ‘S’. However, even though the meta-S is far more detailed and flexible than Torniello’s, it does not hold any information on what makes a good ‘S’.

Translation of Torniello’s instructions in Metafont

It could be argued that a procedural model that replicates an existing, non-computer-based activity does little justice to the potential of computers as design tools. A computer application for designing typeforms is unconstrained by lettering procedures. Shouldn’t this liberation suggest a new design paradigm? In its ductal rationale Metafont is suggestive of a digital version of Gerrit Noordzij’s scheme for description and classification of typeforms according to the translation and expansion of a rendering nib. [10. Noordzij, Gerrit, The stroke of the pen: fundamental aspects of western writing. The Hague, 1982. (A somewhat expanded version of this paper was published under the title ‘Typeface classification’ in the first volume of Bernd Holthusen & Albert-Jan Pool, Scangraphic digital type collection (edition 4). Mannesmann Scangraphic, 1990, pp 65–81)] But Noordzij has his feet firmly planted in the western lettering tradition, and approaches typeforms as inextricably linked to this background. Significantly, his analysis is exegetical, and not intended as a tool for the generation or manipulation of new typefaces. Furthermore, Noordzij acknowledges the presence of exceptions outside the possible design space (the ‘typographic universe’) of his system. This internal consistency is not obvious in Metafont’s approach, in which analysis of one idiom is presented as a tool for describing typeforms of any pedigree. In other words, Knuth adopted an idiom that seems foreign to his tools. Admittedly, the significantly redesigned Metafont 84 made provision for the definition of typeform edges, but Knuth still believed that the pen metaphor ‘gives the correct “first order” insights about how letters are drawn; the edge details are important second-order corrections that refine the designs, but they should not distract us from the chief characteristics of the forms’. [11. Digital typography p. 330]

For a type designer from a non-calligraphic background the ‘moving pen’ paradigm will probably limit the formal freedom of defining typeforms on a virtual plane. The designer will have to translate the contours and patterns in his mind to the grammar of Metafont, a process which is counter-intuitive – not to mention the possibility of the intended forms being unreasonably difficult to convert into Metafont. Moreover, type designers do not generally think in nebulous trends (‘some sort of serif about here’ or ‘a bit of a notch around there’) – they tend to approach new designs with a set of curves or patterns, in positive or negative space, that they then develop to construct the whole of the typeface. In this respect the flexibility afforded by Metafont in creating new designs is of debatable benefit.

Instances of the same meta-S

This issue highlights a common denominator in discussions of digital type: the relationship between the designer’s intentions and the typeforms that are finally rendered is not fixed. The designer will record disembodied – perhaps idealised? – typeforms as they would render under hypothetical, ‘perfect’ conditions, then edit the font data or enrich it with information to make the typeforms render as faithfully to the original intention as the technology allows. It is arguable that, to some extent, a similar process existed in all typeform-making technologies. However, currently available type design software depends heavily on this dissociation of intended and realised typeform. PostScript Type 1 fonts are the best example of this: dependence on a system- or printer-resident rasterizer for rendering means that as the rasterizer is upgraded the same font information may generate different output. In TrueType fonts rasterizing is primarily controlled by information in the font file itself, but the process of specifying this information is quite distinct from the design of the outlines. Metafont is exceptional in that the typeforms generated from the user’s definitions of [groups of] typeforms are inextricably linked to the resolution of the output device. ‘Pens’ are digitised to the appropriate raster before rendering the typeforms. It is also possible for the user to define explicitly the raster frequency, as pixels-per-point or per-millimetre. In other words, the ‘enrichment’ is not a separate stage in font design and production, but an integral aspect of working with Metafont. It might well be that in this respect Metafont’s approach is truer to the digital world – or not?
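The device dependence is plain arithmetic; a small illustration (assuming the conventional 1 pt = 1/72 inch, with representative device resolutions chosen for the example) of why rasterisation cannot be left as an afterthought:

```python
# The same nominal 10 pt letter meets very different pixel grids on different
# devices, which is why Metafont asks for the raster frequency up front.
def pixels_per_point(dpi: float) -> float:
    return dpi / 72.0                 # assuming 1 pt = 1/72 inch

for device, dpi in [("96 dpi screen", 96), ("300 dpi laser printer", 300), ("1200 dpi imagesetter", 1200)]:
    print(f"{device}: a 10 pt letter spans roughly {10 * pixels_per_point(dpi):.0f} pixels")
```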

The middle S in the sequence of the previous illustration, with defining points highlighted. Notice the white regions in the counters, where ‘erasing’ has been specified

There is another argument against Knuth’s scientism: Metafont fails the typographic equivalent of the Turing Test. He asserts that ‘Metafont programs can be used to communicate knowledge about type design, just as recipes convey the expertise of a chef’. I would argue that neither is the case, but I am not a professional cook. To stick to the culinary analogy, however, Metafont can be seen as one of those multi-function tools that chop, grate, slice, and generally do faster and in some cases better all sorts of things that you could do by hand – but it does not join you at the table when the dinner is ready. Can we supply a computer-literate person with the Metafont-programs for a typeface family, and in any way expect them to develop an understanding of the original designer’s concepts for the design?

Indeed it was a designer and educator with far more experience than this writer who interpreted Metafont as implying that ‘the parameters of a design are more important than the design itself – that is: than the idea behind the design and how the face looks and reads’. Here our attention is drawn to a fact that Knuth seems to overlook: typeforms are social animals. Type designers must marry their personal (culturally coloured) viewpoint with their speculations on the reception of their typeforms within their environment. Furthermore, it does not follow that the eventual interpretation of the typeforms will be the anticipated one. This cycle inevitably informs the design process. The changing appreciation of designs through time is witness to how the same typeform – the same procedure – can elicit different interpretations. (Does Akzidenz Grotesk carry the same connotations today as it did at the turn of the century?) Conversely, if the environment within which a typeform is created is foreign to us, Metafont’s ‘capture of the intelligence behind the design’ will not reveal the designer’s intentions. (If a contemporary designer works in a script we are not familiar with, could the Metafont files offer insight into which parts of typeforms were essential elements, and which were embellishments?) Since the computer operates in a social vacuum, it cannot understand what it’s doing. Not only is Metafont merely replicating a procedure, but its product is meaningless within the computer’s frame of reference: Metafont cannot judge whether the typeforms it generates are acceptable in the type designer’s environment. In other words, it is possible that an instance of a meta-typeface is valid within the Metafont-system (indeed, assuming a debugged program, it would not be produced were it not so), but not acceptable in the social context.

 

Now the reasoning behind the caveat about generating letterforms ‘to Cresci’s standard’ becomes obvious: the qualitative sufficiency of the outcome is not guaranteed by the otherwise valid implementation of a Metafont. Knuth has forgotten his brief recognition that Cresci had got it right. Like a true descendant of the 15th-century deterministic letterers so enamoured of the ruler and compass, he defines a formula for determining the optimal curvature at a given point along a typeform. But what he comes up with is, inevitably, his own interpretation of a ‘most pleasing curve’. Clearly, each type designer has his own inner set of ‘most pleasing curves’, proportions, and patterns that he returns to in each design. It could probably be argued that the mathematical expression of curves constrains the range of possible ‘most pleasing’ curves for each circumstance. (W. A. Dwiggins might have had something to add to this. While he was satisfied with the Mergenthaler draftsmen’s transfer of his own drawings to blueprints for pattern-making, he did comment on the over-regularised Legibility Group typefaces. In copying Dwiggins’ drawings the draftsmen were only using their French curves to follow patterns that had been designed according to one person’s vision. On the other hand, the Legibility Group typefaces were designed in-house, and – despite C. H. Griffith’s supervision – were a collective effort. It is not difficult to imagine that in such an environment French curves would suggest patterns, instead of just following them.)

The subjectivity of what constitutes a ‘most pleasing curve’ is borne out by the variety of opinions on any type design. Despite any generalisations we may care to make, the optical compensation of shapes so that they look and feel correct, rather than actually measure so according to a geometric rule, is very much up to the designer’s judgement (and retina). It is this subjectivity which demands that every type designer goes through long printing-out, looking-at, editing, and starting-all-over-again sessions. It is the presence of these internal, deeply subjective ‘French curves’ that gives rise to the re-visitation by many designers of similar themes across quite different typefaces. In this respect, by drawing on existing cultural patterns and expanding on them to create new, personal interpretations, is it an exaggeration to compare typeface design to composing a piece of music, a case for type design as jazz?

When Knuth’s concepts first reached a wider, typographically-minded, audience the debate generated arguments that still resonate. At its best, Digital Typography is a source of provocative inspiration, an opportunity for debate that should not be missed. Its challenges should be heartily taken on by typographers, type designers, and educators alike. We may disagree with Knuth’s adherence to Torniello’s ways, but his thoughts have brought us that little bit closer to our search for a Cresci for the digital age.

‘In search of the digital Cresci: some thoughts on Don Knuth’s Digital typography (Stanford, CA: CSLI Publications, 1999)’. In Information Design Journal, vol 9, no 2 & 3, 2000, pp 111–118

Languages, scripts, and typefaces (2006)

[Response published in tipoGrafica no 70 (2006)]

Do you consider that new technologies will enable those languages that have historically not been represented in font design, to incorporate the sounds appropriate to their tongues?

Hm… the question is somewhat misleading. The ‘languages that have historically not been represented in font design’ bit suggests that typeface designers who are native users of the other languages, the ones that have ‘historically not been represented in font design’, designed their typefaces with the sounds of their language in mind. This is not the same as saying ‘I can hear the words when I read’ or something like that; it means that the designer would have specific sounds in mind when designing a particular glyph. I’m pretty certain this is not the case; even more, I think the hypothesis runs counter to the basic mechanics of natural scripts.

Are fonts developed for specific languages? Even in the old days of 8-bit codepages, when each font file could take up to 256 characters, any single font allowed many languages to be typeset; the same font would do for English, Spanish, French, Finnish, and Italian, just as the same font with the declaration ‘CE’ following its name would cover Croatian, Czech, Estonian, Hungarian, Latvian, Lithuanian, Polish, Romanian, Latin-based Serbian, Slovak, Slovenian and Turkish (I think that’s all).

Such groupings (and there were plenty) were a combination of technical limitations (fitting all the characters in the font file) and commercial expediency: development, distribution, and retail channels. Each of these fonts claimed it could be used to typeset all these languages – and it did, offering a more or less adequate typographic representation of a script’s subset. I am choosing my words carefully here, because the point I am making is that typefaces offer interpretations of scripts, not languages.
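A small sketch of what those 8-bit groupings meant in practice (Python; ISO 8859-1 and 8859-2 stand in for the ‘Western’ and ‘CE’ codepages that such fonts roughly mirrored):

```python
# One 256-slot codepage per market: the 'CE' grouping existed because the
# Western European page simply had no slots for Central European letters.
text = "Łódź"                          # Polish: needs the Central European page
print(text.encode("iso8859-2"))        # ISO 8859-2 (Latin-2) encodes it fine
try:
    text.encode("iso8859-1")           # ISO 8859-1 (Latin-1) has no Ł or ź
except UnicodeEncodeError as err:
    print("not representable in Latin-1:", err)
```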

We can shed some light on this distinction if we consider another distinction. So far I’ve been using the term ‘character’, but in fact this is not strictly correct. At the heart of contemporary applications and typefaces is the Unicode standard: a system for assigning a unique identifier to each character in any script ever used by humans. In this sense, ‘Latin small letter a’ and ‘Greek small letter alpha’ are characters, but ‘a’ and ‘α’ are not: they are glyphs: typographic representations of characters. In other words, all ‘α’s in all the typefaces in the world are the same character: Greek alphas, and all the ‘a’s are Latin [script] ays (how do you spell ‘a’?) – not English or Spanish or Finnish ays. To put it bluntly: the character implies a specification for the formal configuration of the glyph (relationship of positive and negative spaces) but is ignorant of the specific shape.
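The distinction is easy to see in practice: Unicode names characters, and any typeface merely supplies glyphs for them. A quick sketch using Python’s standard unicodedata module:

```python
import unicodedata

# Three visually similar marks, three distinct characters; which glyph appears
# for each is entirely the typeface's business, not Unicode's.
for ch in ["a", "α", "а"]:             # Latin a, Greek alpha, Cyrillic a
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")

# U+0061  LATIN SMALL LETTER A
# U+03B1  GREEK SMALL LETTER ALPHA
# U+0430  CYRILLIC SMALL LETTER A
```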

The relationship between character and glyph is, in my view, strongly analogous to that of a glyph and its voicing within a language. The Latin ‘a’ implies an ‘envelope’ of sounds within each language that is written with the Latin script, and a set of relationships of this sound with neighbouring glyphs. The leeway in speaking the glyph is, however, considerable; even to my unfamiliar ears a word such as ‘tipografia’ sounds very different when spoken by my Argentinian, Mexican, or Spanish students. Should they be writing with different glyphs for the ‘a’ in each language?

If, for the sake of argument, we posited that: yes, each of these three languages requires a different ‘a’ (or a different combination of ‘gr’, for that matter) then we must automatically decide what is the minimum difference in enunciation between the two variants that will trigger a choice one way or the other. Do we document the range of possible sounds that pass for ‘a’ in speech in each of these languages? This can quite quickly turn into a massive exercise in mapping speech patterns and deviations – the age-old classification problem of the infinite pigeonholes, the ‘duck-billed platypus’.

I can almost hear you say: ‘hold on there, you’ve missed the point! We should only be looking at each language in turn, not compare across languages!’ OK; but what will we do with dialects, regional variations, inflections dependent on age, social class, education level, professional affiliations, and the like? Again, this is a dead-end. Should I write English with different glyphs from my children? I have an expat’s accent, forever getting the length of vowels and the strength of consonants wrong; but my children, who go to nursery with all the other children in the area, speak English with an impeccable accent (so much so, they already correct their parents…).

There is only one area where we can strive for a close, one-to-one relationship between spoken sounds and the glyphs of a typeface, and that is the domain of linguists who document spoken language. (The International Phonetic Alphabet is fairly comprehensive in its coverage of sounds the human larynx can produce, and only extended when someone researching vanishing or spoken-only languages comes across a new sound.)

Going down that route will take us further away from the visible form of language, and into questions that deflect from and confuse the study of typeface design; this must, by definition, be limited to the representation of characters, not of sounds. The formal qualities of the glyphs may bear many influences, from the direct (mark-making tools such as nibs and brushes) to the lofty (theories of construction); and they will normally take a long time to define an envelope of expression for each time and place (the strength of which is tested each time we come across the ‘design-by-diktat’ approach seen in Korea, Russia, Turkey, and – most recently – in the ex-Soviet states of the Caspian).

So what about the bells and whistles? Current technology promises a range of options that were not available before outside specialised environments. These must be seen as enriching the typographic expression of a script, not as operating at the level of the sounds the glyphs will generate in specific readers. So, if a Polish typographer prefers diacritics to lie flat over the vowels, whereas a French one may opt for the upright ones, all the better if the font files can provide both, change from one to the other on the fly, and keep both users happy. Similarly, if the Dutch have a mark that looks like an ‘i’ and a ‘j’ close together and are taught at school that this is one letter, you would hope that the whole chain of text editors, word processors, spell checkers, dictionaries and thesauri would recognise it as such. Speaking is interpreting sounds within a culturally-sensitive envelope; so is typeface design: defining glyphs within the acceptable spectrum of each character. But the designer cannot interfere where there is no linguistic ground to step on: if it means different things, go ahead and make a new glyph – but if it just sounds different, well, that’s just a reflection of the imprecise, fluid, and constantly mutable nature of human expression.

Let’s stick to mark-making, I say.