London Paper Boats

Here is the duo Aether & Hemera, who invite us to discover their latest installation at London's Canary Wharf, on show until 15 February 2013. Called "Voyage", this superb creation features a fleet of 300 illuminated boats that can change colour, their shape recalling paper boats.

Liquid Art

A focus on German photographer Markus Reugels, who presents superb shots of water splashes in his series "Splash Photography". Making vivid use of colour, the images are worth admiring in detail in his portfolio, in the rest of the article.

Light Painting Project

A look at "24×360", a project devised by Timecode Lab to experiment with using 24 cameras to shoot at 360° while also employing light painting techniques. With some interesting aesthetic choices, the result can be seen in HD video in the rest of the article.

"The idea is that the phone disappears"

Google Glass

Smart glasses, wearable computers and skin-mounted sensors will soon guide people through airports and shops and allow them to pay for goods and services, according to John Hanke, the head of Google Maps (+ interview).

“I think the general idea is that the phone as an object kind of disappears,” said Hanke. “People are working on skin sensors and other ways of transmitting information to us in a way that’s passive and that doesn’t require us to divert our attention in the way that we do with the phone today.”

Field Trip

In an interview with Dezeen to mark the UK launch of Field Trip, Google’s new location-based publishing app, Hanke said: “When people pull out their phones and begin interacting with an application they’re basically pulling themselves into this bubble and putting up a wall between themselves and the real world around them. So it can take on this negative, anti-social aspect whenever you’re using smartphones and the internet.”

Hanke, who invented Google Earth and runs Google’s NianticLabs division, added: “We can use information technologies to enhance your experience with the real world without taking you out of the real world.”

Dezeen in Field Trip

Field Trip works out where you are and provides information about your location from a variety of websites, including Dezeen. Users browse text and images about nearby buildings, shops and services. Field Trip launched in the USA this autumn and will be rolled out to other countries next year.

Dezeen in Field Trip

Hanke predicted that location-based technology will soon allow people to navigate within buildings and make purchases. “Google has been working on this indoor location-mapping technology that allows you to get high-fidelity, high-accuracy location inside,” he said. “So that you can, for example, find your gate in an airport, or to find a specific aisle in a store, or to find a specific exhibit within a museum.”

Earlier this year Google unveiled Google Glass (top image), a concept for spectacles that display data. These could eventually allow users to find and pay for services such as cycle and car hire: “In the future the whole transaction could happen through Google Glass, payment and everything.”

Read more about Field Trip in our earlier story and see all our stories about Google.

John Hanke

Here is a transcript of the interview between Hanke (above) and Dezeen editor-in-chief Marcus Fairs:


John Hanke: I’m John Hanke. I run a group called NianticLabs at Google, which is kind of a start-up exploring mapping and mobile technologies. For about six years before that I was the vice-president of product, in charge of Google Maps, Google Earth and Google Local.

Marcus Fairs: When did you join Google?

John Hanke: I was a CEO of a company called Keyhole that created the original Google Earth technology. We were acquired by Google in 2004. We came in and then relaunched our product as Google Earth.

Marcus Fairs: Tell us a little about that. How did it come about?

John Hanke: We started working on that around 2000. A lot of people had this vision of a map that would be this virtual globe that would have satellite imagery and would have these information layers available. We were inspired by science fiction. There’s an author named Neal Stephenson who had described a product like that in a book he wrote called Snow Crash. So there was this archetype, and it was just time for it to happen. The technology has finally made it possible to do it, so we wanted to see if we could make it real.

Marcus Fairs: There’s a very long history of maps, from people scratching lines on bits of bark through to the early maps of the world, but what difference did it make to people’s lives when they could zoom in on any point in the whole world? How did that change the way people thought about their lives and the planet they live on, as well as how they were going to get the next place?

John Hanke: I guess I would leave it to other people to draw the grand conclusions about what kind of an impact it’s made on the world. But I think it has to be a positive experience for people to be able to browse around the world and when you look at a country, to be able to zoom in and see that it looks a lot like the place that you live.

You know, when you explore China you’re not zooming into a big red polygon, you’re seeing houses and cities and a landscape that you can relate to. I mean, I think there has to be an underlying positive impact on the world for people to have that experience of exploring and seeing similarities.

Marcus Fairs: And in terms of that kind of technology, how detailed can it get? Can it get to the point where you can zoom in and figure out if your friends are in the cafe already? How much real-time data can that kind of technology provide?

John Hanke: Well as you probably know, Google is now doing indoor mapping and indoor Street View so that you can go from space to a city to the street to the inside of a cafe or restaurant and experience what it’s like inside. And you have all of these kind of real-time technologies for knowing about what’s going on, where your friends are, so I think we’re getting closer to that idea that you can know what’s happening at any place on the world at any time. It’s not fully realised yet, but we’re getting there.

Marcus Fairs: A problem with digital maps, apps and social media is that if you’re looking at your phone you’re not looking at where you are. You’re not concentrating on your route and you’re not enjoying the city, your friends, your family.

John Hanke: Yes and that’s the area that my new group is working on. Some people call this augmented reality, some people talk about ubiquitous computing, but the gist of the idea is that we can use information technologies to enhance your experience with the real world without taking you out of the real world.

People have written about this problem of the information bubble. When people pull out their phones and begin interacting with an application they’re basically pulling themselves into this bubble and putting up a wall between themselves and the real world around them. So it can take on this negative, anti-social aspect whenever you’re using smartphones and the internet.

So we would like to explore this frontier of providing information that enriches your life and your experience of the real world, but does it seamlessly by working in the background and by sequencing that information automatically.

So we launched this application called Field Trip in the US about a month ago that does that. It looks in the background for interesting things around you. We work with publisher partners who make their feeds available within that product and it tells you about cool stuff that’s nearby without you having to pull out your phone and press buttons on a UI. It all happens in the background.

Marcus Fairs: Why did you invite Dezeen on board?

John Hanke: Well I’m a huge fan of Dezeen and its great coverage of the global design community, and we’re delighted to be working with you.

Marcus Fairs: How will our content be useful to someone who’s interested in architecture or design?

John Hanke: The notion is that I can take a stroll through the streets of London and the application will automatically surface a card. Or if I had a headset connected it can actually read that information to me. If I’m walking by a place that Dezeen has written about – it might be a piece of architecture or it might be a really cool local shop, it might be a great coffee shop, or some other piece of design – it will tell me about that without me having to constantly be doing searches or interrogating an app.

Marcus Fairs: How precise is the mapping technology? Could you use it to navigate around a room or does it only really work on the scale of a block or a street?

John Hanke: For what we’re doing with Field Trip right now the accuracy is tens of metres, so it’s really designed for use outdoors. We would love to have it work indoors. The location technology is really just now getting good enough to do that. The problem historically has been that GPS doesn’t penetrate indoors, so you don’t get that very high quality precision location inside of a building. You have to fall back to something like wifi-based location when you’re indoors, which tends not to be accurate.

But Google has been working on this indoor location-mapping technology that allows you to get high-fidelity, high-accuracy location inside, so that you can, for example, find your gate in an airport, or to find a specific aisle in a store, or to find a specific exhibit within a museum.

We have to do additional work to go out and map those locations to get that high-precision indoor mapping, and we’re just starting to do that.

Marcus Fairs: Will people be able to find stories from Dezeen’s archive? At the moment people tend to consume the latest stories on the site, but we’ve got years of archived stuff about buildings around the world.

John Hanke: Yeah that’s the reason I’m really excited about it. These stories that have been written over the years may not be viewed as much. But when you’re out moving through the world, when one of those stories is about a place that’s near you, it’s incredibly useful and relevant. So we want to help people discover that.

Marcus Fairs: How will Field Trip work? You download the app and tell it what you’re particularly interested in, and then the information just pops up as you navigate the city?

John Hanke: Yes. You configure Field Trip before you start the application for the first time and select the feeds that you want. There are different categories that you can select from. Then as you move around through the physical world, information from those publishers about something that is nearby automatically pops up on your phone.
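
The mechanism Hanke describes is essentially a background loop that checks the user's position against geotagged publisher items and surfaces anything within range. As an illustration only — this is not Field Trip's code or a Google API, and the feed item, radius and coordinates below are made up — here is a minimal sketch in Python:

```python
# Sketch of location-triggered "cards" (illustrative; not Field Trip's actual code).
from dataclasses import dataclass
from math import asin, cos, radians, sin, sqrt

@dataclass
class FeedItem:
    title: str
    publisher: str
    lat: float
    lng: float

def distance_m(lat1, lng1, lat2, lng2):
    """Great-circle distance between two points in metres (haversine)."""
    lat1, lng1, lat2, lng2 = map(radians, (lat1, lng1, lat2, lng2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lng2 - lng1) / 2) ** 2
    return 6_371_000 * 2 * asin(sqrt(a))

def nearby_cards(items, here_lat, here_lng, radius_m=200, seen=frozenset()):
    """Return unseen feed items within radius of the current position."""
    return [i for i in items
            if i.title not in seen
            and distance_m(here_lat, here_lng, i.lat, i.lng) <= radius_m]

# Hypothetical example: a story pops up as the user walks past a covered building.
feed = [FeedItem("A building Dezeen has written about", "Dezeen", 51.5054, -0.0235)]
print(nearby_cards(feed, 51.5050, -0.0240))
```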

Marcus Fairs: And does it work around the whole world?

John Hanke: It works around the whole world, but so far we’ve only released the application for the United States. We’re waiting in each country until we have sufficient coverage with partners before we officially launch it. The next one up for us will be the UK.

Marcus Fairs: How else can location-based services change the way we interact with real places?

John Hanke: That’s a big question. There’s a wealth of information available on the internet that makes our lives better but whenever we’re out, moving around through the real world, even though that information is there and theoretically accessible, it’s actually so tedious and time-consuming and awkward to go and retrieve that information or use those services, that we don’t. So effectively, we’re not really getting the full benefit of what’s already available on the web when we’re out moving about in the physical world.

So if we could figure out better user interfaces, where you have agents that work in the background that surface this information to us automatically, I think we can see enhancements in many areas.

Marcus Fairs: How will this information be consumed? Will we have heads-up displays on the inside of our sunglasses?

John Hanke: I think the general idea is that the phone as an object kind of disappears. People talk about wearable computing: we might have audio devices; we might have something like Google Glass. People are working on skin sensors and other ways of transmitting information to us in a way that’s passive and that doesn’t require us to divert our attention in the way that we do with the phone today.

Marcus Fairs: How do skin sensors work?

John Hanke: Researchers are working with things that are fixed to the skin; and sensations can be transmitted that way. That’s one of the more far out types of technologies. Things like the Google Glass can paint information into your field of view about what’s around you, but the idea is that all of these are a way of getting information to you that allows you to remain in the context of what you’re doing without interrupting that.

Marcus Fairs: So the phone is really a transitory bit of technology. It’s sort of between the computer and something that’s part of your clothing or your skin even?

John Hanke: I think that’s true, yes. I think the phone as the single way that you’re benefitting from information technology probably will evolve into something that looks a lot different from that and isn’t the singular artifact that you obsess over and spend all your time interacting with. I think the idea is that it sort of fades into the background and that these other mechanisms are providing the information to you.

Marcus Fairs: So you’re passively receiving information about what’s around you, but how can technology help you interact with the city?

John Hanke: There’s no reason why many of the things that we interact with today can’t have a digital interface to them. So a building or a commercial service can benefit from having a digital UI that’s telling you about it and allowing you to make a transaction or allowing you to interact with it.

Marcus Fairs: Can you give an example of something like that?

John Hanke: Well anything that has a physical UI today may be much more useful and enriched with a digital one. Bike sharing would be one example. How much richer could the act of renting a bike be if it was happening through this digital interface and was projecting into something like Google Glass? I could see the bikes, I could see the road network, I could understand availability across the city. In the future the whole transaction could happen through Google Glass, payment and everything. Just imagine a digital interface to the world around you.

Sony Crystal Aqua Tree

The team at Torafu Architects recently designed this magnificent project installed outside the Sony Building in Tokyo. Called "Ai No Izumi", or the Fountain of Love, this structure made of coloured LEDs imitates the movement of water, modelled on the Trevi Fountain in Rome.

Chaos Collective’s DIY DOF-Changeable Camera Hack

The POÄNG isn’t the focus of this story…

Boasting a killer combo of imaging innovation and NewDealDesign’s award-winning form factor, the Lytro camera is certainly at the top of the list for many tech geeks this year. We have yet to see if the rectilinear gadget will find a place in the serious photographer’s existing arsenal of DSLR, lenses and the like or if it is more of a mass-market plaything. (The $400 price tag suggests that it’s somewhere in between, and it’s difficult to predict the long-term adoption and impact of images with infinite focal points.)

While there might be more to Lytro than meets the eye, the folks at Chaos Collective have devised an SLR hack to approximate exactly what meets the eye in depth-of-field (DOF) changeable photos. Rather than capturing this information over space, as Lytro’s pricey micro-lens array does, they’ve repurposed good ol’-fashioned moving image recording to capture this information over time:

First, let’s briefly discuss how cameras like Lytro work. Instead of capturing a single image through a single lens, Lytro uses a micro-lens array to capture lots of images at the same time. A light field engine then makes sense of all the different rays of light entering the camera and can use that information to allow you to refocus the image after it’s been taken.

But since we only had a digital SLR hanging around the studio, we started looking at ways to achieve the same effect without needing micro-lens arrays and light field engines. The idea is simple; take lots of pictures back to back at various focal distances (collecting the same information, but over time). Then later, we can sweep through those images to pick out the exact focal distance we want to use.

But wait… A sequence of images is just a video! And since most digital SLRs these days make it super easy to capture video and manually adjust focus, that’s all you need. Just hold the camera very still (a tripod is nice, but not necessary), shoot some video, and adjust the focus from near to far. That’s it.

Since most cameras capture video at 30 fps, it doesn’t take a rocket scientist to figure out that a couple of seconds will yield 60+ ‘slices’ of focal distance. The real trick was to accurately map out the focal clarity of the image:

Of course, once we had the video, the next step was to figure out how to make a simple tool that could process each frame of video and compute the clarity of focus for various points in the frame. We ended up using a 20×20 grid, giving us 400 selectable regions to play with. Making the grid finer is simple, but we noticed that making it too small actually made it harder to calculate focal clarity. The reason: we’re looking at the difference between rough and smooth transitions in the image. If the grid is too small, smooth surfaces become difficult to accurately detect. Tighter grids also produce large embed code, so we stuck with 20×20 as a grid that’s dense enough without introducing extra overhead.

Here are the results:

NB: They note that the media is embedded with the HTML5 video tag for cross-browser compatibility, but as of press time the images weren’t working in Firefox…
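
As a rough illustration of the processing step described above — not Chaos Collective's actual tool — here is a minimal Python/OpenCV sketch. The variance-of-Laplacian sharpness measure, the file names and the helper functions are my assumptions; the logic simply scores each cell of a 20×20 grid in every frame of the focus-sweep video, then returns the frame in which a chosen cell is sharpest:

```python
# Focus-sweep "refocusable" image sketch (assumes OpenCV; not the original tool).
import cv2
import numpy as np

GRID = 20  # 20x20 grid of selectable regions, as in the write-up above

def cell_sharpness(gray, rows=GRID, cols=GRID):
    """Return a rows x cols array of focus scores (variance of the Laplacian)."""
    h, w = gray.shape
    scores = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            cell = gray[r * h // rows:(r + 1) * h // rows,
                        c * w // cols:(c + 1) * w // cols]
            scores[r, c] = cv2.Laplacian(cell, cv2.CV_64F).var()
    return scores

def load_focus_sweep(path):
    """Read every frame of the focus-sweep video and score its grid cells."""
    cap = cv2.VideoCapture(path)
    frames, scores = [], []
    ok, frame = cap.read()
    while ok:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        frames.append(frame)
        scores.append(cell_sharpness(gray))
        ok, frame = cap.read()
    cap.release()
    return frames, np.array(scores)            # scores shape: (n_frames, GRID, GRID)

def refocus(frames, scores, row, col):
    """Return the frame in which the chosen grid cell is sharpest."""
    best = int(np.argmax(scores[:, row, col]))
    return frames[best]

if __name__ == "__main__":
    frames, scores = load_focus_sweep("focus_sweep.mp4")   # hypothetical clip
    cv2.imwrite("refocused.jpg", refocus(frames, scores, row=10, col=7))
```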

A few more—and Chaos Collective’s “Make Your Own DOF-Changeable Image” tool—after the jump…

New Zealand ID Student & His Mates Bring the Pixar Lamp to Life

It’s been nearly a year since our series on the Anglepoise lamp, the iconic desk light co-opted by Pixar. Now three students at New Zealand’s Victoria University of Wellington, industrial designer Adam Ben-Dror, programming artist Shanshan Zhou and digital media artist Joss Doggett, have brought the Pixar mascot to life (renamed “Pinokio”).

Pinokio is an exploration into the expressive and behavioural potentials of robotic computing. Customized computer code and electronic circuit design imbues Lamp with the ability to be aware of its environment, especially people, and to express a dynamic range of behaviour. As it negotiates its world, we the human audience can see that Lamp shares many traits possessed by animals, generating a range of emotional sympathies. In the end we may ask: Is Pinokio only a lamp? – a useful machine? Perhaps we should put the book aside and meet a new friend.

It’s true that it doesn’t yet hop around and crush the letter “i” into submission, but we’re sure they’re working on it.

Seriously though, I think the trio’s project has some serious potential; when doing sewing machine repairs on my workbench, I’m constantly re-adjusting the swing-arm lamp to throw light where I need it at that moment. If the lamp could somehow track my eyes when prompted, leaving both hands free to adjust/remove/re-install the part I was working on, I’d pay good money for it. There’s got to be craftspeople out there doing other types of benchtop work that would agree.
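
The students haven’t published their code in this post, but the behaviour described boils down to a perception-to-actuation loop: detect a face, work out where it sits in the frame, and nudge the lamp’s servo toward it. Here is a hypothetical sketch of such a loop using OpenCV’s stock face detector and pyserial; the serial port, the "P<angle>" command format and the gain are invented for illustration and are not details of Pinokio itself:

```python
# Hypothetical face-tracking loop for a Pinokio-style lamp (not the students' code).
import cv2
import serial  # pyserial; assumes an Arduino-style servo controller on this port

cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
arduino = serial.Serial("/dev/ttyUSB0", 9600)      # hypothetical port and protocol
camera = cv2.VideoCapture(0)
pan = 90                                           # current servo angle, degrees

while True:
    ok, frame = camera.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, 1.2, 5)
    if len(faces) > 0:
        x, y, w, h = faces[0]
        error = (x + w / 2) - frame.shape[1] / 2   # horizontal offset from frame centre
        pan = max(0, min(180, pan - 0.05 * error)) # nudge the lamp toward the face
        arduino.write(f"P{int(pan)}\n".encode())   # made-up command format
```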

Panorama Perfected

How about another take on getting the photographer into the photograph? Camera features like panorama angles and stitching have to keep innovating, so the next obvious step, according to the design community, is to include the photographer in the snapshot, but with precision and finesse. The 360Vision Camera has a hinged connection to the display that allows you to use it at many different angles, including for self-portraits.

360Vision is a 2012 iF Design Talents entry.

Designer: Wang Tong


“At night you won’t switch on the ceiling lamp. You’ll switch on the window.”

Glowing walls, windows and furniture will replace light bulbs and LEDs in homes as OLED (organic light-emitting diode) technology improves, according to Dietmar Thomas of Philips Lumiblade (+ movie).

“Just imagine windows where transparent OLEDs are integrated,” says Thomas. “During the day the sun shines into the room and at night you’re not switching on the ceiling lamp or the wall lamp, you’re switching on the window.”

The low working temperature of OLEDs – around 30 degrees centigrade – means that the light source can be integrated into furniture, Thomas says, and even painted onto walls.

“OLED will open up completely new ways where light can be introduced to the customer,” Thomas says. “In the far future, say five or 10 years or so, you’ll paint the wall with a colour with OLEDs mixed into it, so when you apply a current, the whole wall lights up.”

Thomas spoke to Dezeen at the Lumiblade Creative Lab in Aachen, Germany, where we were invited to make a film about OLED technology and its future uses.

OLEDs generate light when electricity is passed through layers of organic semiconductor material mounted on glass.

“OLED is the first light source that is a surface light source,” Thomas says. “All other light sources are point light sources, starting with the flame, the candle and going up to the light bulb and the LED. For the first time you don’t need a system to spread the light. The system is built in.”

Today’s OLEDs are less than 2mm thin and their maximum size is 12 x 12cm but in the near future they will be less than a millimetre thin and up to a metre square, Thomas predicts.

While today they are relatively expensive, prices are expected to fall dramatically: “I expect OLEDs to be in the mass market within the next five years, so everyone can buy OLED systems at IKEA,” says Thomas.

Lumiblade is the brand name of Philips’ OLED lighting products and the Lumiblade Creative Lab is used to introduce designers to OLEDs and help them develop innovative uses for the technology. Products on show at the lab include prototypes by Tom Dixon, Jason Bruges and rAndom International.

Other future uses for OLEDs include in cars, where their thinness compared to LED technology will allow car designers to provide more internal space or design shorter vehicles.

Designs developed at the Lumiblade Creative Lab include Mimosa, an interactive piece by Jason Bruges (above).

The music in the movie is a track called Mostly Always Right by 800xL. Listen to the track on Dezeen Music Project.

Here’s some text from Philips Lumiblade about OLED technology:


OLED – The new Art of Light

OLEDs (Organic Light-Emitting Diodes) represent the next step forward in the evolution of new light sources, generating light by semiconductors, rather than using a filament or gas. Like LED lighting, OLEDs provide illumination that is more energy-efficient, longer-lasting and more sustainable. It also opens exciting new doors to how we can use, integrate and ‘play’ with light for decorative, design and ambience creation purposes in our cities – in homes, offices, shops or hotels.

LEDs and OLEDs – the difference

A key difference is that OLEDs are created using organic semiconductors, while LEDs are built in crystals from an inorganic material. There are also visible differences between these two types of solid-state lighting. LEDs are glittering points of light – in essence, brilliant miniature bulbs. OLEDs, on the other hand, are extremely flat panels that evenly emit light over the complete surface. The illumination they produce is ‘calm’, more glowing and diffuse, and non-glaring.

The thin, flat nature of OLEDs also enables us to use and integrate light in ways that are impossible with any other light source. OLEDs will not replace LEDs – they have their own very specific and useful types of application. The two, however, complement each other very well, providing different options in a new era of digital lighting.

Leading the development and application of OLEDs

Philips was one of the first companies to make OLED lighting technology commercially available to architects and designers on a large scale through its Lumiblade OLED panels of different shapes, colors and structure, marketed under the name Philips Lumiblade. Furthermore, Philips’ Lumiblade Creative Lab in Aachen, Germany, gives lighting designers, luminaire manufacturers and creative minds the opportunity to get hands-on experience of OLED light as a material, and to partner with Philips in creating customized OLED solutions.

The company also has OLED product development facilities in Brazil and China, enabling close collaboration with architects and designers all over the world, and announced a EUR 40 million investment to expand production capacity at its facility in Aachen last year.

Capturing the beauty of light with OLED applications

In a highly competitive market, hotels, retailers and companies are constantly looking for ways to stand out from the crowd, as a distinct brand with a unique identity. Their image and identity are also communicated through the design and decoration of their shops, hotels or offices. Innovative lighting applications can play an important role in creating a unique ambience in these environments. Philips Lumiblade offers a range of such applications incorporating OLED lighting into eye-catching products.

Philips’ LivingShapes interactive wall, the world’s largest OLED lighting installation that is commercially available today, consists of 72 OLED panels incorporating a total of 1,152 Lumiblade OLEDs. Each panel has a click-fit system, so customers can easily combine as many panels as they want, generating an interactive OLED installation within a few minutes. The installation is ideal for company headquarters, lounges, hotel lobbies or high-end residential constructions.

Philips will take interactive OLED lighting even further with the launch of the LivingShapes interactive mirror in 2012, shown for the first time at the LIGHTFAIR in Las Vegas. The interactive mirror is designed to enhance retail showrooms and enhance ambience in a hospitality setting.

Philips continues to lead the market in making OLED lighting brighter, larger and available for broader use with the introduction of its new high performance OLED Lumiblade GL350. The new OLED panel, shown for the first time in the US, offers an unprecedented combination of lumen output and size at an attractive price-performance ratio, making OLED lighting more viable than ever before for general lighting applications.

How OLEDs work

OLED lighting works by passing electricity through one or more incredibly thin layers of organic semiconductors. These layers are sandwiched between two electrodes – one positively charged and one negatively. The ‘sandwich’ is placed on a sheet of glass or other transparent material called a ‘substrate’.

When current is applied to the electrodes, they emit positively and negatively charged holes and electrons. These combine in the middle layer of the sandwich and create a brief, high-energy state called ‘excitation’. As this layer returns to its original, stable, ‘non-excited’ state, the energy flows evenly through the organic film, causing it to emit light. Using different materials in the organic films makes it possible for the OLEDs to emit different colored light.

The OLEDs currently available are mounted on glass. So far, glass is the only transparent substrate that sufficiently protects the material inside from the effects of moisture and air. However, scientists at Philips Research are investigating ways to make soft plastic substrates that will provide the necessary protection. This will open the way for bendable and moldable OLED lighting panels, making it possible for any surface area – flat or curved – to become a light source. We could see the development of luminous walls, curtains, ceilings and even furniture. Flexible OLED panels are likely to become available within 6 years.

Today, OLEDs generally have a reflective, mirror-like surface when not illuminated. Another current area of research is on the development of completely transparent OLEDs, which will open many new doors in application possibilities. Transparent OLED panels will be able to function as ordinary windows during the day, and light up after dark, either mimicking natural light, or providing attractive interior lighting. During the day, they could also function as privacy shields in homes or offices. Look out for transparent OLED panels within the next 2 years.

Product Performance (2012)

» up to 45 lm/W in different shades of white and RGB
» up to 4,000 cd/m² brightness
» up to 15,000 hours lifetime (at 50% initial brightness)
» 1.8 mm thin
» <100 cm² surface

As a rule of thumb: we expect the efficiency to double every 2-3 years.
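
To put that rule of thumb in rough numbers (my own back-of-envelope arithmetic, not a Philips projection), doubling the 2012 figure of 45 lm/W every two to three years looks like this:

```python
# Back-of-envelope projection of OLED efficacy, assuming a doubling every ~2.5 years.
efficiency_lm_per_w, year = 45.0, 2012
for _ in range(3):
    year += 2.5
    efficiency_lm_per_w *= 2
    print(f"~{year:.0f}: ~{efficiency_lm_per_w:.0f} lm/W")
# roughly 90 lm/W around 2014-15, 180 lm/W around 2017, 360 lm/W around 2019-20
```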

Capture ideas quickly with index cards

I recently wrote a post about effective inbox management that came down to this: use as many inboxes as you need and can check reliably, and no more. One reader, “Erika in VA,” left a comment requesting more information on how I used index cards to capture information for my physical inboxes:

Loved the post, but could you explain what you normally write on the 3×5 cards? My typical physical in-box item is paper, so I’m having a hard time imagining how I might use 3×5 cards to help process “stuff.” Thanks!

Paper is Technology

Even as a technology-savvy person, I love paper and use it daily. In my experience, nothing is more flexible. Paper is pure potential. You can jot down a shopping list or solve a complex financial crisis with a pencil. In fact, paper is an example of technology: “the application of scientific knowledge for practical purposes, esp. in industry,” or “machinery and equipment developed from such scientific knowledge.”

I’ve been using index cards for years. There’s always a stack on my desk. I use them for several purposes, but most frequently to capture ideas, tasks, reminders, figures and so on for later reference. Here’s how and why I love using index cards.

Capture

How many times have you said, “I’ve got to remember to …”? When I have those thoughts, I know that if I don’t write them down right then and there, I’ll forget them. I must capture this “stuff.” I use the term capture to mean to create a record of that idea, thought, or bit of information that I know I’ll review later. That last bit is crucially important.

When I’m away from my desk I’ll use a notebook and pen to capture stuff. I like Fieldnotes Brand notebooks and the Fisher Space Pen, but really anything you like will do. When I’m in the car, I use my iPhone and record voice memos with Apple’s Siri.

But when I’m at my desk, it’s all about the index cards.

There’s a simple inbox on my desk that I bought at Staples. It contains a stack of unused index cards, plus any that I’ve written on during the day. When I’m working and I think of something that I want to capture, I grab a blank card and write a few words down. Just enough to trigger my memory later. For example, there’s a card in my inbox as I write this that says, “Make ‘Tally’ next week’s app.” I know that means I must review an app called “Tally” for next week.

When I’m done writing, I toss the card back into the inbox. The whole capture process takes just a few seconds, and that’s important. The more time I spend off task, the harder it will be to get back on task. Since I can jot something down in just a few seconds, I can return to whatever I was doing prior to making that note easily.

Review

At the end of the day, I process the index card notes I’ve made. This is simple to do. Just pick up each card, read it and decide:

  1. What is it?
  2. What must be done (if anything)?

Do

The first question typically has three possible answers:

  1. An action step. Something that must be done, either by me or by someone else.
  2. Reference material. This is information that doesn’t require action but could be useful in the future. Move it to your long-term storage solution.
  3. Date- or time-specific to-do item, or what I’d call a “reminder.” Add to your calendar.

That’s it. I move through each card in turn, following these steps. It’s pretty simple, but there is one important rule: go in order. It’s tempting to pass over a card that’s boring or seemingly too-much-to-think-about-right-now. If you put it back once, unprocessed, you’re likely to pass over it a second time. And a third. So, you are not allowed to put a card back and you may not alter the order.

A Matter of Trust

Earlier I mentioned that I know I’ll review my index cards at the end of the day. In other words, I trust my system. This is critically important. When my brain knows, “Yeah, he’ll look at this later. I trust him,” it stops pestering me. Imagine that you promise yourself, “I’m going to clean the basement.” Every time you walk past that basement door, your brain says, “We ought to be cleaning the basement, you know.” But if you make an appointment to clean the basement on Saturday at 10:00 a.m., your brain will give you a pass. “She’s put it on the calendar. We’re good.” I know in my bones that I’ll review my index cards, so no more remembering to change the furnace filter when I’m driving on Rte. 3 and can’t do a thing about it.

That’s my index card system: capture, review, do. Nice and simple. It’s quick, I trust it, and it works. I hope this answers your question, Erika!
