In 1898, an unassuming British stenographer hatched the idea of "garden cities" as an antidote to dirty, crowded London. Today, a resurgence of that idea is spreading through cities across the planet.
In 2011, illustrator and graphic designer Malika Favre was commissioned by Penguin Books to illustrate the new Deluxe Classic Cover of the Kama Sutra by Vatsyayana. Using the first seven letters she illustrated for the commission as a starting point, Favre decided to develop the full alphabet, resulting in a racy Kama Sutra typeface. After creating the designs, Favre worked with animators to turn her images into actively coital GIFs. Inspired by the everyday design and fashion she encounters in London, Favre’s aesthetic is bold and colorful, with clean and simple lines and curves. Favre admits she often wears the colors she uses in her designs, and she’s unsure which influences which. Because her designs are so simple, Favre has to approach her work with a strong concept, something that is elegantly evident in her Kama Sutra alphabet. Each letter in the exhibition is available for purchase as a limited edition of 25 screenprints, numbered and signed by the artist.
The post Malika Favre’s Animated Kama Sutra Alphabet appeared first on Beautiful/Decay Artist & Design.
How science fiction writers inform the way we think about the real world:
Jordin Kare, an astrophysicist at the Seattle-based tech company LaserMotive, who has done important practical and theoretical work on lasers, space elevators and light-sail propulsion, cheerfully acknowledges the effect science fiction has had on his life and career. “I went into astrophysics because I was interested in the large-scale functions of the universe,” he says, “but I went to MIT because the hero of Robert Heinlein’s novel Have Spacesuit, Will Travel went to MIT.” Kare himself is very active in science fiction fandom. “Some of the people who are doing the most exploratory thinking in science have a connection to the science-fiction world.”
Microsoft, Google, Apple and other firms have sponsored lecture series in which science fiction writers give talks to employees and then meet privately with developers and research departments. Perhaps nothing better demonstrates the close tie between science fiction and technology today than what is called “design fiction”—imaginative works commissioned by tech companies to model new ideas. Some corporations hire authors to create what-if stories about potentially marketable products.
In an earlier post in this series, I examined the articulatory relationship between information architecture and user interface design, and argued that the tools that have emerged for constructing information architectures on the web will only get us so far when it comes to expressing information systems across diverse digital touchpoints. Here, I want to look more closely at these traditional web IA tools in order to tease out two things: (1) ways we might rely on these tools moving forward, and (2) ways we’ll need to expand our approach to IA as we design for the Internet of Things.
The seminal text for Information Architecture as it is practiced in the design of online information environments is Peter Morville and Louis Rosenfeld’s Information Architecture for the World Wide Web, affectionately known as “The Polar Bear Book.”
First published in 1998, The Polar Bear Book gave a name and a clear, effective methodology to a set of practices many designers and developers working on the web had already begun to encounter. Morville and Rosenfeld are both trained as professional librarians and were able to draw on this time-tested field in order to sort through many of the new information challenges coming out of the rapidly expanding web.
If we look at IA as two faces of the same coin, The Polar Bear Book focuses on the largely top-down “Internet Librarian” side of information design. The other side of the coin approaches the problems posed by data from the bottom up. In Everything is Miscellaneous: The Power of the New Digital Disorder, David Weinberger argues that the fundamental problem of the “second order” (think “card catalogue”) organization typical of library-science-informed approaches is that such schemes fail to recognize the key differentiator of digital information: that it can exist in multiple locations at once, without any single location being the “home” position. Weinberger argues that in the “third order” of digital information practices, “understanding is metaknowledge.” For Weinberger, “we understand something when we see how the pieces fit together.”
Successful approaches to organizing electronic data generally make liberal use of both top-down and bottom-up design tactics. Primary navigation (driven by top-down thinking) gives us a birds-eye view of the major categories on a website, allowing us to quickly focus on content related to politics, business, entertainment, technology, etc. The “You May Also Like” and “Related Stories” links come from work in the metadata-driven bottom-up space.
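The interplay described above can be sketched in a few lines of code. This is a minimal illustration with hypothetical data and function names, not a description of any real system: an editor-defined category tree stands in for top-down navigation, while per-article tags drive a bottom-up “Related Stories” lookup.

```python
# Top-down: an editor-defined category tree gives the bird's-eye view.
CATEGORIES = {
    "politics": ["elections-2024", "city-budget"],
    "technology": ["iot-overview", "smart-homes"],
}

# Bottom-up: per-article metadata (tags), supplied by indexers or readers.
TAGS = {
    "elections-2024": {"voting", "policy"},
    "city-budget": {"policy", "finance"},
    "iot-overview": {"sensors", "policy"},
    "smart-homes": {"sensors", "housing"},
}

def related(article_id):
    """Rank other articles by how many tags they share with this one."""
    mine = TAGS[article_id]
    scored = [
        (len(mine & tags), other)
        for other, tags in TAGS.items()
        if other != article_id and mine & tags
    ]
    # Most shared tags first; no category tree is consulted at all.
    return [other for _, other in sorted(scored, reverse=True)]

print(related("city-budget"))
```

Note that the two structures are independent: `related("city-budget")` surfaces `"iot-overview"` from a different top-down category entirely, because the connection lives in the metadata, not the hierarchy.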
On the web, this textually mediated blend of top-down and bottom-up is usually pretty successful. This is no surprise: the web is, after all, primarily a textual medium. At its core, HTML is a language for marking up text-based documents. It makes them interpretable by machines (browsers) so they can be served up for human consumption. We’ve accommodated images and sounds in this information ecology by marking them up with tags (either by professional indexers or “folksonomically,” by whoever cares to pitch in).
There’s an important point here that often goes without saying: the IA we’ve inherited from the web is textual — it is based on the perception of the world mediated through the technology of writing. Herein lies the limitation of the IA we know from the web as we begin to design for the Internet of Things.
We don’t often think of writing as “technology,” but inasmuch as technology constitutes the explicit modification of techniques and practices in order to solve a problem, writing definitely fits the bill. Language centers can be pinpointed in the brain — these are areas programmed into our genes that allow us to generate spoken language — but in order to read and write, our brains must create new connections not accounted for in our genetic makeup.
In Proust and the Squid, cognitive neuroscientist Maryanne Wolf describes the physical, neurological difference between a reading and writing brain and a pre-literate linguistic brain. Wolf writes that, with the invention of reading “we rearranged the very organization of our brain.” Whereas we learn to speak by being immersed in language, learning to read and write is a painstaking process of instruction, practice, and feedback. Though the two acts are linked by a common practice of language, writing involves a different cognitive process than speaking. It is one that relies on the technology of the written word. This technology is not transmitted through our genes; it is transmitted through our culture.
It is important to understand that writing is not simply a translation of speech. This distinction matters because it has profound consequences. Wolf writes that “the new circuits and pathways that the brain fashions in order to read become the foundation for being able to think in different, innovative ways.” As the ability to read becomes widespread, this new capacity for thinking differently, too, becomes widespread.
Though writing constitutes a major leap past speech in terms of cognitive process, it shares one very important common trait with spoken language: linearity. Writing, like speech, follows a syntagmatic structure in which meaning is constituted by the flow of elements in order — and in which alternate orders often convey alternate meanings.
When it comes to the design of information environments, this linearity is generally a foregone conclusion, a feature of the cognitive landscape which “goes without saying” and is therefore never called into question. Indeed, when we’re dealing primarily with text or text-based media, there is no need to call it into question.
In the case of embodied experience in physical space, however, we natively bring to bear a perceptual apparatus which goes well beyond the linear confines of written and spoken language. When we evaluate an event in our physical environment — a room, a person, a meaningful glance — we do so with a system of perception orders of magnitude more sophisticated than linear narrative. JJ Gibson describes this as the perceptual awareness resulting from a “flowing array of stimulation.” When we layer on top of that the non-linear nature of dynamic systems, it quickly becomes apparent that despite the immense gains in cognition brought about by the technology of writing, these advances still only partially equip us to adequately navigate immersive, physical connected environments.
According to systems thinker Donella Meadows, we learn to navigate systems by constructing models that approximate a simplified representation of the system’s operation and allow us to navigate it with more or less success. As more and more of our world — our information, our social networks, our devices, and our interactions with all of these — becomes connected, our systems become increasingly difficult (and undesirable) to compartmentalize. They also become less intrinsically reliant on linear textual mediation: our “smart” devices don’t need to translate their messages to each other into English (or French or Japanese) in order to interact.
This is both the great challenge and the great potential of the Internet of Things. We’re beginning to interact with our built information environments not only in a classically signified, textual way, but also in a physical-being-operating-in-the-world kind of way. The text remains — and the need to interact with that textual facet with the tools we’ve honed on the web (i.e. traditional IA) remains. But as the information environments we’re tasked with representing become less textual and more embodied, the tools we use to represent them must likewise evolve beyond our current text-based solutions.
In order to rise to meet this new context, we’re going to need as many semiotic paths as we can find — or create. And in order to do that, we will have to pay close attention to the cognitive support structures that normally “go without saying” in our conceptual models.
This will be hard work. The payoff, however, is potentially revolutionary. The threshold at which we find ourselves is not merely another incremental step in technological advancement. The immersion in dynamic systems that the connected environment foreshadows holds the potential to re-shape the way we think — the way our brains are “wired” — much as reading and writing did. Though mediated by human-made, culturally transmitted technology (e.g. moveable type, or, in this case, Internet protocols), these changes hold the power to affect our core cognitive process, our very capacity to think.
What this kind of “system literacy” might look like is as puzzling to me now as reading and writing must have appeared to pre-literate societies. The potential of being able to grasp how our world is connected in its entirety — people, social systems, economies, trade, climate, mobility, marginalization — is both mesmerizing and terrifying. Mesmerizing because it seems almost magical; terrifying because it hints at how unsophisticated and parochial our current perspective must look from such a vantage point.
As information architects and interface designers, all of this means that we’re going to have to be nimble and creative in the way we approach design for these environments. We’re going to have to cast out beyond the tools and techniques we’re most comfortable with to find workable solutions to new problems of complexity. We aren’t the only ones working on this, but our role is an important one: engineers and users alike look to us to frame the rhetoric and usability of immersive digital spaces. We’re at a major transition in the way we conceive of putting together information environments. Much like Morville and Rosenfeld in 1998, we’re “to some degree all still making it up as we go along.” I don’t pretend to know what a fully developed information architecture for the Internet of Things might look like, but in the spirit of exploration, I’d like to offer a few pointers that might help nudge us in the right direction — a topic I’ll tackle in my next post.
I've seen a lot of hand-wringing from techies in San Francisco and Silicon Valley saying "Why are we so hated?" now that there's been a more vocal contingent of people being critical of their lack of civic responsibility. Is it true that corruption and NIMBYism have kept affordable housing from being built? Sure. Is it true that members of the tech industry do contribute tax dollars to the city? Absolutely. But does that mean techies have done enough? Nope.
First, some perspective: The leaders of the technology industry in Silicon Valley are among the richest people who have ever lived in the history of the world. That's some crazy shit right there. And I know firsthand, from living in New York City, where we have an egregious, unacceptable and immoral level of economic inequality, that these are difficult problems to face. But the two biggest reasons techies in New York don't face the same blowback are (1) we have the finance industry to shield us by being more disgusting than tech in almost every regard, and (2) our local technology community has a very strong ethos of community involvement, with the expectation that people who work in tech will also be involved in their community to solve bigger problems. It's just that simple.
So, what can folks who work in tech in San Francisco do to defuse the widespread resentment of their impact on the city? Here are a few simple suggestions:
The ridiculous, crazy thing is, these ideas, while nowhere close to real solutions, were meaningful first steps that I came up with in five minutes after a friend raised this issue in frustration. As far as I've seen, none of even these simple initiatives has been tried yet. The reason techies are getting a hard time in San Francisco is that they haven't taken even the first steps. Now, to be clear: We need to get our shit in order in New York City, too. While most of our startups aren't big enough to have the infrastructure to do these things, and we don't have commuter shuttles, there's no reason we couldn't adapt versions of these initiatives. But already, disaster-related hackathons like those around Sandy have helped our neighbors, hosting mayoral forums that raise substantive issues has contributed to our political discourse, drafting meaningful public policy suggestions has driven the dialogue about how we best serve all our citizens, and volunteers across the city put their tech skills to work serving the needs of their neighbors. All of these things are just as possible in San Francisco.
Now I'm not saying these take the place of real, substantive, long-term engagement and investment. They obviously do not. But these could be useful bits of progress if they lead to people in the technology industry approaching their civic institutions with humility and respect, listening to their neighbors honestly and openly, and making commitments instead of demands. I joke around a lot about San Francisco as a New Yorker, but having lived in San Francisco for years, I know it can be a great city. So act like it, and be worthy of it.