Feb 17 2014 (a 22-minute read)
“The architecture of information” was written between October 2012 and January 2013. It led to the Reframing Information Architecture roundtable at the ASIS&T Information Architecture Summit 2013. It also generated an interesting conversation on the nature of information architecture that you can read in A conversation on the architecture of information. A revised version was published in French as “Les architectures d’information” in Études de Communication no. 41, 2013.
This text should be considered the revised, up-to-date, canonical version of the original article. A few terms have been changed for clarity, as their external references have been modified since it was first published (for example, all instances of “pseudo-modernism” have been changed to “digimodernism” following Kirby), and a number of paragraphs have been edited for added clarity.
I split the original article in two; this first part is still a long read, and dense in places. If you make it to the end and quote any of this somewhere else, please use EdC’s own entry as reference: Resmini, A. (2013). Les architectures d’information. Études de communication [Online]. Vol. 41. http://edc.revues.org/5380.
Part two has yet to be published.
The paper maintains that in the epistemological shift from postmodernism to digimodernism, technological, economic, social, and cultural elements of change have thoroughly transformed the scenario in which information architecture operated in the late 1990s and have eroded its channel-specific connotation as a website-only, inductive activity, opening the field up to contributions coming from the theory and practice of design and systems thinking, architecture, cognitive science, cultural studies and new media. The paper argues, through a thorough discussion of causes and effects and selected examples taken from the practice, that contemporary information architecture can thus be framed as a fundamentally multi-disciplinary sense-making construct concerned with the structural integrity of meaning in complex, information-based cross-channel ecosystems.
The Swiss-French architect Charles-Édouard Jeanneret, better known as Le Corbusier, wrote “Vers une architecture” in 1923. The book, soon to become one of the most successful and controversial pamphlets in the history of architecture and the manifesto of Modernism, was a collection of essays and articles coauthored with purist painter and personal friend Amédée Ozenfant and originally published in the cubist periodical L’Esprit nouveau. At its core, it was a passionate call to architects to embrace the machine-inspired beauty of modernity and turn it into a novel idea of architecture. Only by leaving palaces behind and becoming “a mirror of the times” would architecture fulfill its role in the creation of a “human milieu” (Le Corbusier, 2007, Introduction).
To Le Corbusier, this meant privileging function over form. He saw architecture stagnating and perpetuating ideals stemming from “cursed enslavement to the past”, turning “eyes that do not see” to the industrial dynamism of modern life. While the car was redesigning not only mobility but the city itself and its social and economic hierarchies, architects were “lost in the sterile pochés of their plans” (Le Corbusier, 2007, p. 105), suffocating in routine: “(h)ouses like tabernacles, tabernacles like houses, furniture like palaces (with pediments, statues, columns spiral and not spiral), ewers as house-furniture and dishes (…) that won’t hold three hazelnuts!”.
The calculations of “(a)nonymous engineers, greasy mechanics” on the other hand exemplified this new beauty: mathematical, pure, and natural. The book is a constant praise of engineering work: airplanes, ocean liners, bridges, grain silos. These, Le Corbusier says, are architectonic examples of beauty architects are incapable of producing. Stuck with the bourgeois mansion, they are incapable of making the house into the machine for living in that it is supposed to be: “(b)aths, sun, hot water, cold water, controlled temperature, food conservation, hygiene, beauty through proportion (…). An armchair is a machine for sitting in. (…) Our era fixes its style every day. It’s right before our eyes. Eyes that do not see.” (Le Corbusier, ibid., pp. 151-156).
But the book is not only an endless string of invectives. With its “three reminders” to architects (volume, surface, and an increased attention to the plan as the originator of space), and with the attention accorded to regulating lines, light, and order, it clearly spells out the elements of this new poetics, which would in time become the foundation of Modernism. And yet, after ninety years, the single most powerful statement that still emerges clearly from “Vers une Architecture” is that “(u)n esprit nouveau souffle aujourd’hui”, that “(t)here exists a new spirit”, and that this spirit was a spirit of change. It was there for everyone to see.
Information architecture (IA) entered mainstream practice in the mid 1990s with the wave of practising information architects led by Lou Rosenfeld and Peter Morville, but its origins stretch back to the 1960s and to pioneering work on human-computer interaction and system structure and behavior carried out at IBM Labs first and then at Xerox PARC (Resmini & Rosati, 2011a). A modern formulation arrives with Richard Saul Wurman in the 1970s: Louis Kahn’s ex-protégé brings information architecture to the American Institute of Architects (AIA), and calls himself an information architect. “I don’t mean a bricks and mortar architect. I mean architect as used in the words architect of foreign policy. I mean architect as in the creating of systemic, structural, and orderly principles to make something work – the thoughtful making of either artifact, or idea, or policy” (Wurman, 1997). More than forty years later, Wurman still stands by his definition (VanPatter, 2005). But where does information architecture stand?
When dealing with a contemporary assessment of information architecture, we must state clearly that the necessary body of knowledge the field has been discussing for more than fifteen years now (Haverty, 2002; Resmini et al, 2009) is still in the making. The Journal of Information Architecture has played a part, but academia is slow in the uptake (Dade-Robertson, 2011, p. 13). The surge of scientific interest that information architecture enjoyed in the late 1990s and early 2000s came mostly from the area of library and information science, from which Rosenfeld and Morville originally hailed. When the field entered the period of crisis that would then initiate change in the mid 2000s, no solid, widely shared foundations to promote information architecture in its own research space were yet visible or exploitable.
As a result, a rather large fraction of the current scientific literature on information architecture is either produced within the boundaries of other disciplines, with all the negative consequences this produces in the heavily compartmentalized academic discourse, or sadly out of touch with much of what has happened in the practice in the past ten-fifteen years and with the new, multi-disciplinary framings coming from architecture (Norberg-Schulz, 1971; Ferschin & Gramelhofer, 2004; Klyn, 2012), urban planning (Lynch, 1960; Jacobs, 1992), cognitive science (Johnson, 1987; Dourish, 2004), systems thinking (Meadows, 2006), and new media (Norman & Lucas, 2000; Manovich, 2001; Tryon, 2009) that are reshaping the theory of information architecture. Though happening at the borders of academic territory, conversations about labeling, websites, and hierarchies have been replaced by conversations around sense-making, placemaking, architecture, new media, and embodied cognition (Hinton, 2009; Klyn, 2012; Fisher et al, 2012). Similarly, in the practice, websites are only one among the many information-based artifacts, either digital or physical, that information architecture contributes to designing, from way-finding systems (Bussolon & Potente, 2009) to ecosystems (Rosati, 2012).
I maintain the reason for this change in focus and scope rests on the paradigm shift that in the 2000s took us from postmodernism to digimodernism (Kirby, 2009), and on socio-technological changes connected to: the general availability of broadband and mobile broadband in most of the world (Lomas, 2012; ComScoreData, 2012); the creation of large ubiquitous service ecosystems (Norman, 2009); the transfer of much control into the hands of consumers-producers (Jenkins, 2006; Kirby, 2009); the adoption, use and remediation of these systems and their meaning by the masses (Shirky, 2008) to serve ever-changing objectives, thus reinforcing the mechanism (Sterling, 2005).
Constant access to information, either through personal mobile devices or public ambient systems, has drastically changed the patterns of consumption and production that were established in the 90s. Smartphones first and tablets second have especially transformed our relationships with information (Mueller, Gove & Webb, 2012).
Additionally, information is also being embedded in physical space, augmenting our in-place experience of a certain location (UrbanFlow, Layar, Shadow Cities), providing us with forecasting or planning abilities (GPS, Google Maps), or adding a variety of in-context social capabilities through map-like applications (Path, FourSquare, Uber). We have created an unexpected, layered, uneven, very real version of what we believed “cyberspace” ought to be (Institute for the Future, 2009). Rather than jacking-in via cortex-level implants or through sophisticated cyborg apparatuses, people in 2013 can access “cyberspace” anytime and for any purpose from the privacy of their homes, in the quiet of a mountain top, and amidst the confusion of airports, bus stations, and crowded streets via run-of-the-mill consumer electronics. The closest we have come so far to mainstream direct bodily augmentation, the recently announced Google Glass, still configures a model where one or many information layers are superimposed over our view of the world, rather than a full-scale sensory replacement.
It is important to note that mobile itself is not the revolution, but rather the enabling layer that makes the revolution possible and provides new niches of opportunity (Kauffman, 2012). The revolution is in what constant, mobile access to connected and manipulable information allows us to do: how it allows what is digital to modify our use and perception of physical space (Dourish, 2004); how it modifies our sense of place (Tuan, 1977) simultaneously turning distances into semantics (Höök et al, 2003) and reinforcing the very idea of being in-context through the use of geolocation and map-based approaches (Norberg-Schulz, 1971; Hinton, 2009); and finally, how it turns passive receivers into wranglers constantly weaving new subjective narratives (Sterling, 2005) that potentially span all of this “cyberspace”.
The downside of being “always on” is fragmentation, a general sense of non-belonging, and loss of meaning (de Ugarte, 2012).
In 1998, in their seminal book “Information Architecture for the World Wide Web”, Peter Morville and Lou Rosenfeld could argue that the Web was the unifying factor that could bring together many different technologies and wildly diverse types of content (Rosenfeld & Morville, 1998). This was the context in which classic information architecture (Resmini & Rosati, 2012a) operated: even if computers had ceased to be cumbersome presences, computing still remained an activity with precise boundaries in space and time: usually at a desk, either in the home or the office. Once a certain task or tasks were accomplished (browsing websites, searching for information, sending or receiving email, writing documents, playing), the computer was usually switched off. When people moved away from the desk, computing did not follow.
This is also somewhat reflected in the oxymoronically disconnected nature of the World Wide Web as it comes out of Rosenfeld and Morville’s book: while they certainly stress the importance of hypertextuality, it is immediately apparent, especially in the 1998 edition, that a “website” is an artifact that lives a rather self-contained existence and that is designed in isolation. This is of course no theoretical or practical shortcoming: this was the reality of online communication design in the late 90s.
But just as they did for the architects of the 1920s, things have changed for information architects. If “(o)nce there was a time and place for everything” (Mitchell, 2003, p. 237), in 2013 the Internet has become so pervasive that it is both much more than a medium and one piece in a larger mechanism. Today “things are increasingly smeared across multiple sites and moments in complex and often indeterminate ways” (Mitchell, ibid.) and our focus has moved necessarily away from the single artifact to consider the product or service ecosystem (Resmini & Rosati, 2009) as a complex, cross-channel (Resmini, 2012b) information-based beast. Some parts of this beast might not be online, and some might even be not digital at all.
This is a first formidable push towards change. Constantly reshaped and reconnected over an arbitrary number of different interacting channels by an ever increasing amount of actors, ecosystems force us out of the illusion of the Web as a “simpler” world, where the lenses provided by library science or graphic design seemed to be enough (Dillon, 2002). The question then is: under radically altered conditions, has information architecture really changed? If yes, how?
In a way, it certainly has. The practice has certainly caught up. When illustrating his work with American department store giant Macy’s, Peter Morville clearly stressed the value of going beyond the simple challenge offered by the website and clearly connected the different strategies across touchpoints and channels, explicitly structuring his information architecture to be systemic (Morville, 2012). Luca Rosati at the Istituto degli Innocenti in Florence, Italy, and Jason Hobbs at the Johannesburg Art Gallery in Johannesburg, South Africa, have used information architecture to solve cross-channel design issues and to handle indeterminate problems. All of these examples vastly exceed in scope and complexity the navigation, labeling and structuring model of classic information architecture (Rosenfeld & Morville, 1998).
As for how, this medium-aspecificity is probably the largest difference between contemporary information architecture and classic information architecture. I argued before (Resmini et al, 2012) that this is not a difference in nature. Rather, it rests on the primary attention to the working practice of classic information architecture: instead of focusing on the sense-making framing that Wurman originally formulated, information architects chose to define their discipline through the artifacts of the practice. In the specific parlance of the late 1990s, websites. Through the years, what was a perfectly acceptable way to frame a practical problem (what do you do?) became a paralyzing identification between a discipline in the making and some of its deliverables, methods, or tools (you are what you do). Information architecture is labeling. Information architecture is card sorting. Information architecture is wireframes. Which, as is plain to see, is not far from maintaining that photography is but films and printouts. Boundaries that were incidental and serendipitous were turned into absolutes.
This was a predicament that was well reflected in academic research in information architecture, where if possible things were even more problematic. In a 2001 article for the Bulletin of the ASIS&T, Andrew Dillon’s perspective was still technically-minded and entrenched in engineering. Dillon maintained that one of the major limitations of information architecture at the time was that “computers in their various forms can make demands on users that stretch the patience and emotional stability of even the most sanguine”. He went on to enumerate the problems that plagued the architecture of information spaces: “random glitches, unpredictable crashes, dead links, incompatible applications and just plain bad user support are the more likely causes of problems” (Dillon, 2001). I doubt we would (or should) consider a (software) crash a specific problem of information architecture today.
From these premises, though, Dillon concluded that “information architecture is a very real problem and one that we are not yet even close to solving. That said, I am not sure there even is a solution but an ongoing need for refinement and improvement coupled with a greater awareness of human need and contextual resources. Information architecture, as a field, needs to address such issues and counter the onslaught of technical determinism that pervades the information technology world. While the use of the term architecture has both its supporters and its critics, I feel it really can be justified in the information domain and, more importantly, used for inspiration and insight” (Dillon, 2001).
Most of the research in information architecture still dwells within those walls (Burford, 2011). Academia has yet to catch up, and with the benefit of hindsight we can certainly confirm that the lack of any structured progression between 2005 and 2009 has exacted a toll. Opportunities have been lost (Arbogast, 2006) that the publication of the Journal of Information Architecture and other similar initiatives have only partially started to address. As a community of practice (Hobbs et al, 2010), the information architecture community has had a limited ability to store and disseminate knowledge without confusing it with opinions and circular discussions. In this scenario, without a solid body of knowledge to refer to and with a thousand sketchy individual inductive processes that nobody could prove right or wrong at its disposal (Haverty, 2002), when users suddenly started to be able to create their own categorizations and build their own structures and architectures, some form of crisis was inevitable.
The years of the explosion of the Internet as a mature communication channel are the years in which the shift from postmodernism to digimodernism (de Ugarte, 2012; Kirby, 2009) becomes manifest. Unsurprisingly, they are also the years when information architecture faces its biggest crisis so far.
When in 2005 Peter Morville published his second book, “Ambient Findability”, he was not only widening, if somewhat diluting, the boundaries of information architecture, but he was also riding a long wave of doubt that would solidly hit the practice between 2005 and 2007, and that had to do with the loss of control, centrality, and certainty brought along by the coming of user-generated content. Morville’s ambivalent take on the main “offender”, tagging and folksonomies (Quintarelli, 2005; Vander Wal, 2007), is in plain sight all through the book. “Forget about ontologies and taxonomies. Folksonomies are the future. As David Weinberger puts it, ‘The old way creates a tree. The new rakes leaves together’” (Morville, 2005, p. 139). Not only does this read as a backhanded insult (after all, what is the utility of a pile of leaves apart from calling to mind the proverbial haystack in which things get lost forever?), but Morville elaborates a few lines below that the “metaphor is perfect”, because leaves in a pile, “(t)hey rot”, and beautiful trees of all shapes and colors will feed on them and grow. He then concludes that when it comes to findability, which in the context of the book is often a placeholder for information architecture, “their inability to handle equivalence, hierarchy, and other semantic relationships causes them to fail miserably at any significant scale” (Morville, ibid., p. 139).
This was a view that many held within the community of practice, not just Morville. It was a view that was trying to reconcile the old ways with the new, but was unfortunately solidly rooted in the previous paradigm and still left the community, much like the architects Le Corbusier was addressing in the 1920s, totally incapable of seeing the direction this new wind was blowing.
From a cultural standpoint, the shift from postmodernism to digimodernism involves a change in authorship and participation models in which much control is lost. For information architecture, still somewhat proceeding in a modernist framing of absolutes, this was an unexpected turn of events and a sore blow that shook quite a few walls. But if folksonomies were a prime manifestation of the crisis within the domain, they were also just a symptom of the larger condition that accompanied the paradigm shift. More radical changes than free tagging were in store.
When American DJ and producer Afrika Bambaataa released his single “Planet Rock” in 1982, featuring a distinctive and catchy sampling from European avant-garde pop group Kraftwerk, he had a hit single on his hands, but he also had an artifact that perfectly explained the new irreverent and recombinatory logic of postmodern culture. The whole hip hop music scene of the early 1980s is a very good example of the central role played by citationism, intertextuality, irony, and pastiche within early postmodernism (Bertens, 1994; McGuigan, 2006).
In much the same way, the deluge of reality TV shows and their cultural derivatives is an expression of the rapidly decreasing centrality assigned to the author that marks digimodernism. In the late 1990s and early 2000s, “the emergence of new technologies re-structured, violently and forever, the nature of the author, the reader and the text, and the relationships between them” (Kirby, 2009): while postmodern narrative, with its over-conscious sense of self and history, attention to intertextuality, and often overused pastiches (of styles, times, genres), still remains an authorial affair, digimodernism “fetishes the recipient of the text to the degree that they become a partial or whole author of it” (Kirby, 2009).
A sense of detachment is another primary aspect of postmodernism that digimodernism rejects, favoring visceral, raw, uncut, first-person immersion within what appears to be, legitimately or because of careful directing and editing, the unfolding stream of events. Detachment is a consequence of acute self-consciousness. David Mitchell’s “Cloud Atlas” or Umberto Eco’s “The Name of the Rose” resort to playing with mirrors, layers, and narrative-within-narrative to regain a pristine voice: most often, citations or parodies provide a simpler and generally very successful way to achieve the same effect (Rombes, 2009). The “Shrek” movie franchise is a good example: detachment, irony and re-composition (de Ugarte, 2012) are central, necessary elements of the storytelling mechanism that fuels the green ogre’s adventures. At the same time, it is the “loops” we already know from a thousand other retellings that make the story an interesting read, albeit a very different one from that of the tradition.
From this specific point of view, postmodernism is eminently and irremediably an old media phenomenon concerned with finished products, books, films, music and the television screen, and fundamentally addressing culture as a “spectacle before which the individual (sits) powerless” (Kirby, 2012). Postmodernism paints a picture where Twitter, Facebook, Instagram, Path, and any service that relies on user co-production and constant transiency (Sterling, 2005), have no place. Digimodernist artifacts “cannot and do not exist unless the individual intervenes physically in them” (Kirby, 2012).
In this perspective, convergence (Jenkins, 2006) still delineates a largely industry-driven model of audience involvement, at least for the tv broadcasting and film industries (Tryon, 2009). It is no different for the many nuances of crossmedia and transmedia. Even though crossmedia slowly seems to assume a more static connotation, that of an “environment” where “content is repurposed, diversified and spread across multiple devices to enhance, engage and reach as many users/viewers as possible” (Iacobacci 2008), and transmedia a more dynamic, “storytelling” allure characterized by “content (that) becomes invasive and permeates fully the audience's lifestyle (...) across multiple forms of media in order to have different entry points in the story; entry-points with a unique and independent lifespan but with a definite role in the big narrative scheme” (Jenkins 2011), they remain top-down models where control is supposed to remain in corporate hands.
Digimodernist narratives turn the tables on this approach. In them, just like on Facebook, “content and dynamics are invented or directed by the participating viewer or listener”. But the nomenclature is off, as these participants are not simply viewing and listening, or browsing through a website; they are actively contributing to the creation of the final artifact. These are the “wranglers” Sterling was talking about (2005), co-producers of content and structure, and a very different “user” from the one information architecture was addressing in the early years of its web-related heyday.
So it does not come as a surprise that information architecture as a construct faced a thorough moment of crisis when the paradigm shifted. User-generated content and structures such as folksonomies, digimodernist artifacts, challenged the modernist and postmodernist framing that the generation of the 1990s was applying, unaware. The very idea of information architecture was questioned and its death, due to impending uselessness, proclaimed (Porter, 2006).
The first tentative steps to fully embrace the complexity of digimodernism came with those who tried to recompose the fracture between the classical top-down taxonomic vision and the new bottom-up emergent structures (Campbell & Fast, 2006; Quintarelli et al, 2007). No single truth was to be attained in the process, no supreme order identifiable through the “calculations” of engineers, but rather a multiplicity of voices that prefigures many possible orders. In the age of Facebook, there can be no single, unified homepage or timeline, as we are all constantly remediating our sources and producing our own tailored version of services, conversations, and ultimately reality. This is precisely why information architecture, in its broader medium-aspecific “sense-making” sense, is actually central to the design of artifacts that have to deal with the changes introduced by the paradigm shift.
The primary artifact of information architecture, unlike other fields of design, is abstract: it is the arrangement and organization of the information structure, which in its truest form exists primarily as a negotiation between the environment and the actor traversing it. It is sense-making and placemaking. Physical elements of the information architecture, such as navigation or search, are akin to signs in a way-finding system: parts that participate in the whole, but that even when fully collected still fall short of being the whole. The design artifact here is the specific journeys, the specific structures, that the actors in the system design for themselves as they orientate through a service or series of connected services. It is a process of sense-making and placemaking in digital and physical space. The transience of artifacts and their constant flow create states of uncertainty and imbalance for the actors in the system that need to be counteracted to prevent degradation of the user experience, something entirely unnecessary in the more passive, unidirectional crossmedia and transmedia experiences.
This is a logical and pragmatic response to the intellectual dilemmas brought forth by the shift: in true digimodernist fashion, individual perspectives are valuable not because everything is valuable, but because they embody possible individual resolutions that can be rendered resilient through persistence or proven to be transient, improving both the process and the artifacts via a bottom-up approach that epistemologically prioritizes the results over the creators, the tools and the deliverables. The decoupling from the traditional artifacts of the practice introduces two important consequences in contemporary information architecture. The first one is a conceptual shift towards indeterminacy, complexity, and abstract thinking (Fenn & Hobbs, 2013); the second one, brought along by the increasingly hybrid digital / physical nature of cross-channel information spaces, is a reinforced attention to the creation of a sense of place (Cresswell, 2004; Höök et al, 2003; Resmini & Rosati, 2011b) and of meaning (Norman & Lucas, 2000; Fisher et al, 2012).