March 16, 2009
The other day I popped into my local branch of Lloyds TSB and saw this next to the ATM:
Taking a closer look:
A heatmap shows the busiest times for that particular branch, with a bit of analysis above to help people make sense of it and explain, for example, the grey block on Wednesday morning (staff training day).
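The underlying analysis is straightforward to sketch: bucket timestamped transactions by weekday and hour. A minimal illustration in Python (the timestamps and opening hours below are made up, not the branch's real data):

```python
from collections import Counter
from datetime import datetime

# Hypothetical branch transaction log -- in reality this would come from
# the head office data feed described in the interview.
transactions = [
    "2009-03-09 12:15", "2009-03-09 12:40", "2009-03-09 09:05",
    "2009-03-10 12:55", "2009-03-10 13:10", "2009-03-11 10:20",
]

# Bucket each transaction by (weekday, hour) -- the two axes of the heatmap.
counts = Counter()
for ts in transactions:
    dt = datetime.strptime(ts, "%Y-%m-%d %H:%M")
    counts[(dt.strftime("%a"), dt.hour)] += 1

# Render a crude text heatmap: one row per weekday, one column per opening hour.
for day in ["Mon", "Tue", "Wed", "Thu", "Fri"]:
    row = "".join(str(counts[(day, h)]) if counts[(day, h)] else "." for h in range(9, 17))
    print(day, row)
```

The real display presumably aggregates months of data and colours the cells, but the principle is the same: a count per (day, hour) cell.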
Short interview with the guys who put it up
This is very interesting – a bank visualising its data to change customer behaviour. Rebecca Reeves, the branch manager, was kind enough to answer a few questions about it:
Raphael D’Amico: Thanks for agreeing to explain this display a little bit. So, why did this start?
Rebecca Reeves: We noticed that we had both business and personal account holders coming in the lunchtime rush hour, even though business customers can generally choose to come at any time of the day. The idea behind this was to try to get our business customers to come by when the branch was quieter.
RD: How does it work?
RR: It uses the transactions done in the branch. We record the data and a team at the head office feeds back these heatmaps.
RD: Is it just this branch?
RR: No, it is done across the country.
RD: Has it worked?
RR: It has actually. We started recording data about a year ago, and put the first heatmap on the wall six months ago. When we analysed the data again quite recently we saw that customer transactions were more spread out across the day.
In particular, it didn’t make that much of a difference to personal customers – they still came mostly at lunchtimes – but business customers did start coming more often at other times.
RD: How did you measure the improvement? Did you measure queue lengths, for example?
RR: Just by sight – the only formal measurement was the transaction data, which tells us the time and type of customer, for example.
RD: Thanks for your time.
Neat, but what could it do better?
This idea is clearly a good one and has worked, but there is as always room for iteration. Here are a few suggestions:
- Make it bigger and move it slightly further from the cashpoint. Putting it next to a cashpoint is a good idea and gives it exposure, but the size and positioning means you have to be close to the wall to see it. This leads to two less than ideal situations:
1) You look at it while taking cash out, which slows you down and makes people behind you wait.
2) You take a proper look afterwards, which means you have to stand directly next to and very close to the next person taking money out, which tends to make both you and them uncomfortable. It is almost a social taboo to do this, and probably keeps a fair few people away.
A larger, more legible display would solve this.
- Put it near to other queues in the branch, not just the ATM. The queue for the bank teller is longer than the one for the ATM, which would make customers even more receptive to this kind of display.
- Measure how long you are actually taking to serve customers. While the transaction data is a good proxy, Lloyds should spot check exactly how long it takes them to serve each customer (how do they promise four minutes?). This may also allow them to segment their customers better – perhaps there are some transactions that are more time consuming and could be addressed in the heatmap display.
- Show customers the changes. Showing people that this display has already changed behaviour may make it even more effective through social proof.
- Share data. Comparing customer patterns across branches might reveal some good techniques they can learn from each other. I didn’t ask about this, so it could be that the branches already do this – I imagine the data analysis is centralised for this purpose.
It’s really great to see a large organisation using this kind of technique (particularly a bank, right now!).
Are there other companies feeding the behaviour of their customers back to them?
February 11, 2009
Human centered design teaches you to optimise like crazy for the user of your product, but is there a situation where this leads to a design which puts off the person paying the bill?
I spotted this review of one of Dyson’s high tech and rather stylish hoovers. Is this user happy that some of his or her clients have paid considerably more to acquire one?
“I´m out in the field, hoovering for my disabled clients in their own homes, amongst other duties a carer has to do. Everytime I know there is whatever type of Dyson in the household, I simply bring my own hoover along. I mean, a 12GBP one from Tesco, made in China. Oh yeah, you don´t need a NASA training to use such a thing. It´s got two buttons. The ON/OFF one and the other that coils the cord in a flick of a second (unheard of, Mr.Dyson?). I mean, not cutting and “eating” it by the front. And guess what? I can lift the whole thing with my little finger, put it inside a small backpack and catch a crowded bus with it, if I wanted to. See, hoovering itself is annoying enough so I don´t really see the point in battling a StarTrek ship to make it easier on you! Or is using Dyson suppose to be Fun? Well, it CERTAINLY is not the quality of the job it leaves behind, that would make me buy it. In fact, I have never came accross a less effective hoover. So the only reason for Mr. Dyson still not going out of bussiness I can think of is that people buy his products because ITS SAID TO BE GOOD and all other marketing and status tricks, but in the end it´s their au-pairs, cleaning ladies and people like myself that have to put up with about 8times as much of job then SIMPLE HOOVERING.”
This review raises a very important question: should you compromise your design if it will get it into the hands of more users?
Who are you designing for?
One of my favourite applications by far is Tableau. I use it every day and am stunned by how simple it is to create clear visualisations of large datasets. It’s blazingly fast to use and defaults to an elegant visual style which puts the data first – check out the nicely put together product tour here. It makes it easy to create dashboards like this:
Its main competitor is Crystal Xcelsius, which sadly makes it easy to create awful dashboards like this:
This is not a post about data visualisation, but in a nutshell the problem with Crystal Xcelsius is that it focuses far too much on cosmetic aspects which add nothing to (and often detract from) the data behind the dashboard. I will let Jorge Camões and Stephen Few explain its problems in more detail (both of their blogs are mandatory reading for anyone dealing with large amounts of data, incidentally).
The question is this: how many of Tableau’s sales have been taken by Crystal Xcelsius because of its fancy effects?
This is Excel 2003:
And this is Excel 2007. Notice how some of the most important options have moved from radio buttons (1 click) to dropdowns (2 clicks), and how its most important sections (e.g. scale and patterns) have been partially lumped together to leave more room for irrelevant 3D and formatting effects.
Who is Microsoft trying to appeal to?
There is a clear disconnect between the needs of the user, who would likely benefit from simpler chart creation, and the buyer, who may be swayed by the additional features (“What harm could they do?”). On top of this, the user of the software is not the ultimate user; that place goes to the person trying to make sense of the final chart.
A few thoughts:
- How can you make sure you are designing for the right person? Sometimes the ultimate user is not who you think they are.
- This conflict between user and buyer does not necessarily mean two people or departments. It is within us all.
- Can you tell whether you are selling to the user or the buyer? Perhaps you can show different aspects of your design to each.
What other examples of this are there?
January 18, 2009
I managed to make it down to BETT on Saturday, and found that yet again, the golden rule of conferences continued to apply: the bigger your stand, the less likely you are to be cutting edge. Here are a few of the things that caught my attention:
Okay, so I’ve just broken my golden rule, but getting to finally play on the Surface was genuinely cool (I promise not to break it again). The technology has been covered absolutely everywhere so I won’t go over it again (not seen it? this is cheesy but gives a good idea); what was interesting was to see some examples of it helping kids get more engaged with learning.
The kids above are playing a spelling game. Each of those little round tokens scattered on the table has a letter on it, and the aim is to press them in the right order to spell the word in the picture. The trick is that you have to press and *hold* them, so to do it before the time runs out you need a few more pairs of hands than just your own.
Why this matters: it was really fun, and quickly got everyone talking to each other. In this particular example, you have the engagement of a well designed game with the quality control that a computer system can bring, highlighting just how useful these table sized displays may become.
Apart from that, there was a 3D virtual heart which you could fly around in, a drawing program where you could smear virtual paste around, and of course the usual table sized Google Maps (the kids absolutely loved that one, see picture).
This makes me really look forward to the rise of tabletop computing (and bartop, of course…).
Guardian news tools
Also interesting were a pair of projects by the Guardian, a major UK national newspaper, to get pre-teen children engaged in the news whilst at the same time teaching them valuable critical reading and writing skills.
The first project, LearnNewsDesk, was a large database of simplified news articles arranged into easily digestible chunks under each school subject area, with exercises and a glossary attached to each story. Kids can also upload podcasts and articles with their own takes on the news. New articles are added daily so the site is a good approximation of the real news.
Why this matters: it’s a good reminder that to be useful and remembered, information must be aware of its context. If you know that context you can add all kinds of metadata as a hook to help that information stick in the mind. In this case, the system provides a great sandbox which teachers can use to help young kids understand some important issues.
Project number two, Newsmaker, was the flipside of the news desk, giving children a very simple collaborative, web based tool to create their own paper (see the solitary picture below). At the heart of it is a fixed template into which kids can put their own articles and pictures.
One kid gets to be the editor whilst the others take on the roles of journalists and picture editors, with simple word processing and picture editing tools to insert their work. This is cool as it 1) provides a very simple platform for collaboration and 2) lets groups of pupils easily create a professional looking paper.
Why this matters: easy. Somewhere along the way of trying to teach desktop publishing to ten year olds, schools have forgotten that just learning to lay something out with Microsoft Office is not enough – you need to have something to say with it too.
Peter Molyneux, maker of legendary and beautiful games, once said about hiring 3D artists (and I paraphrase as I can’t find the actual quote):
“It’s easier to teach a great artist to use a 3D modelling tool than to teach an expert in 3D Studio Max or Maya to be a great artist”
Tools like these which do one thing very well are a great way to get straight to the meat of what you are trying to do. Would you rather spend a lesson bogged down in the technicalities of Microsoft Publisher or actually publishing something?
Autology is a natural language search tool which does three rather interesting things:
- “Push” search. It watches what you are typing in Word or equivalent and can suggest relevant information. This is apparently already being used by MI5, the FBI, BBC, Reuters, Merrill Lynch, the Deutsche Bundesbank and IBM, and is yet another step towards computers seamlessly integrating with our processes. Instead of having to go to Google, Google comes to you.
- Vertical search. It indexes hundreds of high quality secondary school textbooks to increase the relevance of the results. Search focused on particular verticals is clearly going to be an interesting area if generalised search engines reach an upper limit to their accuracy.
- Search folders. With this feature, a teacher or student can create a themed folder which gets automatically loaded with documents relevant to a particular search query. This is yet another example of dynamically structuring information to be most useful. If you’ve ever used a smart playlist in iTunes, you’ll know how handy this is, and it’s becoming increasingly important as information multiplies.
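Autology’s internals aren’t public, but the idea behind a search folder can be sketched as a stored query that is re-run against the collection as it grows, so the folder fills itself. Everything below is illustrative, not Autology’s actual API:

```python
# A search folder modelled as a stored query: the folder's contents are
# recomputed from the query whenever the document collection changes.
documents = [
    "The water cycle: evaporation, condensation and precipitation",
    "The Battle of Waterloo and Napoleon's final defeat",
    "Photosynthesis in green plants",
]

def search_folder(query, docs):
    """Return the documents containing every term in the query."""
    terms = query.lower().split()
    return [d for d in docs if all(t in d.lower() for t in terms)]

# A teacher's "Napoleonic Wars" folder fills itself as articles arrive:
print(search_folder("napoleon waterloo", documents))
```

Real vertical search would rank by relevance rather than filter on exact terms, but the dynamic-folder structure is the interesting part: the folder is a query, not a fixed list.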
Why this matters: I can’t really comment on the theoretically improved quality of natural language search vs keywords as I didn’t have long enough to test it, but the push search is fascinating nevertheless. In the words of David Black of Autology:
“It is pattern recognition technology which is able to push relevant information to a user. There is no need to go and search for it. It can be pushed to you conceptually, matched to what you are writing about.
“It is like a student sitting in a library and as they are writing their essay somebody keeps coming up to them saying ‘you are writing about this, have you seen this?’ They are not having to go and get it. It’s as near as you are going to get to artificial intelligence.”
From the Birmingham Post
Computers have gone from filling a room to pocket sized, but still require us to play by their rules to get the most out of them. The next step is technology which knows what you need and discreetly sends it your way. Autology may be another tiny leap in this direction.
More highlights coming soon…
What caught your attention at BETT 2009?
January 15, 2009
On a completely different note: here’s a hugely inspired ad for Goldstar Beer.
Incidentally, these guys have done some even more brilliantly edgy funny stuff: click here.
January 14, 2009
In my last post I mentioned a fascinating essay by Stephen Downes: “The Future of Online Learning: Ten Years On”. In it, he revisits and updates his 1998 predictions about the future of education, most of which apply heavily to business, which is already an environment where people have to learn at different paces. Here are five of the most interesting ideas from this long and thoughtful piece.
1. One task to rule them, and in the mashup bind them
“In 1998 I wrote that computer programs of the future will be function based, that they will address specific needs, launching and manipulating task based applications on an as needed basis. For example, I said, the student of the future will not start up an operating system, internet browser, word processor and email program in order to start work on a course. The student will start up the course, which in turn will start up these applications on its own.”
This points to one of the big changes ahead: a move towards a world where we perform complex tasks using an array of interconnected web applications, each with simpler, more specific functionality, hosted on increasingly smart devices and connected to each other via the Cloud. UNIX junkies with their tiny command line applications will be overjoyed. Developers of wonderful, hulking, multi-purpose applications (Microsoft and Office, Adobe and Creative Suite, Autodesk and AutoCAD) will find their most casual users chipped away.
The big challenge for designers of these tools is twofold: 1) their applications need to be open, and interoperate properly and, 2) the user experience will somehow need to be consistent enough not to confuse people.
Resources like http://www.programmableweb.com/ and organisations like http://www.dataportability.org/ are helping the industry make headway on the first point. Thanks largely to Google Maps, the word mashup is now commonplace, and tools like Yahoo Pipes, Microsoft Popfly, JackBe, Dapper, Kapow, IBM’s QEDWiki, Proto, BEA AquaLogic and RSSBus are multiplying (more here). This matrix is also pretty cool.
Number two is tougher, as there is more of a grey area between it working and not than there is when grabbing data from another service. However, it is crucial for designers to keep an eye on the development of standards in interaction design. For the web, frameworks like YUI which allow standard controls to proliferate are useful, although they must still be used carefully. Physical devices are an entirely different issue, and can turn an accepted way of interacting on its head (e.g. the iPhone with touch and the Wii with motion sensing).
The main opportunity: get to know your customer and you will be able to meet their specific need better, faster and more cheaply than ever before.
2. Many screens – a.k.a. letting information come to us
“In 1998 I wrote that ‘The PAD will become the dominant tool for online education, combining the function of book, notebook and pen.” The PAD, I said, would be “a lightweight notebook computer with touch screen functions and high speed wireless internet access.” I also said it would cost around three hundred dollars…”
“With slim, lightweight technology, truly useful and portable PADs will be widely available within the next ten years. We have already seen significant improvements in screen technology, including slim touch-sensitive screens. Wireless access and cloud computing make bulky storage devices unnecessary; what local memory is needed will be more than adequately managed using tiny flash memory chips. Improvements in battery life and solar power will mean that these low-wattage portable computers will run for days. They will, as I suggested before, come in all shapes and sizes, from a slim pocket version (much like the iPod touch) to a notepad version..”
“The same technology that makes PAD technology possible will continue to propel improvements in large screen displays (devices I nicknamed WADs (Wide Area Displays) ten years ago).
“In the future, it will be common to see these large-area displays hanging on living room and classroom walls. Instead of being the size of small windows, they will be the size of large blackboards. They will be touch sensitive (or if not, connected to a pointer tracking system similar to the ones being cobbled together for less than $50 by Wii enthusiasts (Lee, 2007) or included with any of a number of children’s educational webcam games today (such as Camgoo, among many others)).”
For too long we have bent over backwards for computers, limited to a (relatively) small screen and a computer taking pride of place on our desk. In the future, the opposite will be true. We are surrounded by information. In the future, we will use an array of different devices to access it – from iPhone style handhelds for simpler tasks to desk and wall sized interactive touchscreens for the bigger ones:
“…imagine that any environment that contains a flat surface can become a teaching environment, one where your friends’ faces (or your parents’ or your teachers’) can appear life-size on any old wall or on a table surface as you converse with them from the next room or around the world. We have already seen how the availability of mobile telephones has transformed society in less than a generation. (New Media Consortium, 2008) Having much more powerful, much more expressive, communications technology available everywhere will have a similar impact.”
3. If it ain’t fun, forget it
“A great deal has been written in the last few years about educational games or, as they are sometimes called, ‘serious games’. (Eck, 2006) In 1998 I wrote that “educational software of the future will include every feature present in video games today, and more.” Though this hasn’t proven to be strictly true, it is largely true, and probably no more true than in the domain of games and simulations.”
“In 1998, I wrote the following: “To give a student an idea of what the battle of Waterloo was like, for example, it is best to place the student actually in the battle, hearing Napoleon’s orders as they become increasingly desperate, feeling the recoil of one’s own musket, or slogging through the mud looking for a gap in the British cannons.” (Downes, The Future of Online Learning, 1998) Today we can say that the creation of such simulations will not be simply the domain of large production houses, but will rather be more and more the result of massive collections of small contributions from individual players. And that the creation of content – any content – needs to take this phenomenon into account, or be seen as abstract and sterile.“
Giving people a chance to experience a situation they are learning about is an unusually good way of making sure they understand it. The human brain is playful. As such, give it a complex environment to experience and you can guarantee that it will start pushing, pulling, prodding and generally attempting to find out how it works – trying to work out the rules.
Imagine trying to teach music by showing someone only the score to Mozart’s Requiem, or art appreciation by describing one of Turner’s sunsets. Ultimately, our subconscious minds are much more attentive than our conscious, which is why we get so much more depth from an experience than from a description.
4. Personalised learning, group evaluation
Another big idea is that of personalised learning environments. Instead of having students chug through a defined syllabus with standardised tests to mark the pace, the educational institution’s responsibility will be to connect them with projects, resources, games and members of the community around that domain. As they get more and more involved:
“…each person will have what may be thought of as a ‘profile’ of their own art, music and other media, which they have created themselves or with friends, along with records of their activities in various games and simulations (we see things like this already with applications like Launchcast) that take place both on and off line.”
What is really interesting is how all this will be tested:
“In the end, what will be evaluated is a complex portfolio of a student’s online activities. (Syverson & Slatin, 2006) These will include not only the results from games and other competitions with other people and with simulators, but also their creative work, their multimedia projects, their interactions with other people in ongoing or ad hoc projects, and the myriad details we consider when we consider whether or not a person is well educated.”
“Though there will continue to be ‘degrees’, these will be based on a mechanism of evaluation and recognition, rather than a lockstep marching through a prepared curriculum. And educational institutions will not have a monopoly on such evaluations (though the more prestigious ones will recognize the value of aggregating and assessing evaluations from other sources).”
“Earning a degree will, in such a world, resemble less a series of tests and hurdles, and will come to resemble more a process of making a name for oneself in a community. The recommendation of one person by another as a peer will, in the end, become the standard of educational value, not the grade or degree.”
5. Learning resources will annotate the world
“Online learning still suffers from the misperception that it is about having students sit in front of their computer screen for extended periods of time. As a consequence, the idea that online learning might foster independence of place has been missing in much of the discussion of the field. (…) That said, with the recent development of smaller and lighter wireless-enabled devices, we are approaching the era when online learning will also be seen as mobile learning. Students will be freed from the classroom, and freed from the stationary desktop computer. And as I said last time, true place independence will revolutionize education in a much deeper sense than has perhaps been anticipated.”
Much of what goes on about us has a history and a significance that we miss completely, whether it’s the context in which a piece of technology was developed or the story behind a piece of architecture. In a more concrete business context, it might be the profitability of a piece of machinery or the childcare problems of an employee you have a meeting with in 10 minutes, which are affecting his ability to concentrate.
Well designed learning resources have the potential to guide us through the physical world rather than pulling us away. Incidentally, that’s why walking tours of cities can be so interesting – you see these layers peeled back for you.
More to come.
January 11, 2009
Next Wednesday is the start of BETT 2009, the world’s largest educational technology event. 30,000 teachers will be learning about the best, coolest new ways of helping others learn.
This is a very important event, and not just for teachers.
Of technology’s many contributions to human civilisation, education is where the rubber hits the road. Remote learning, electronic paper, digital note taking, individualised curricula, etc… are just the latest episodes in the series which started with the drawing of shapes in the sand.
What separates us from animals is how good we are at transferring knowhow to our children, which allows each generation’s knowledge to become a foundation for the next to build on. However, we are limited by the length of education – older brains may learn more slowly and, anyway, most of us start work at the end of our teens. In the UK, half of adults stopped at or before 16 (data, key); the figure is slightly higher in the US.
As such, if we can make better use of those 12–15 years we can give a whole population a headstart. Humanity on steroids, if you will.
Knowledge is a performance enhancing drug.
As we get a better handle on the rules of learning we can make better tools to help apply them, and to teach our teachers to apply them. Furthermore, what goes on in the classroom is only the beginning. The trend towards more decentralised, personalised learning is exactly what we need after formal education. These same tools and techniques may help us with our lifelong learning – whether training to do our jobs better, learning new skills or pursuing our hobbies.
Every designer creating tools to help us visualise, manipulate, remember and use information needs to keep a close eye on teaching and education. With more and more people making a living in the information economy, each new tool is another potential mind hack.
The opportunities are huge.
One caveat, which will be the subject of another blog post. We must never forget the context in which education takes place. For now, this anecdote is as illustrative as any:
“…black students who study hard are accused of “acting white” and are ostracised by their peers. Teachers have known this for years, at least anecdotally. [Roland] Fryer found a way to measure it. He looked at a large sample of public-school children who were asked to name their friends. To correct for kids exaggerating their own popularity, he counted a friendship as real only if both parties named each other. He found that for white pupils, the higher their grades, the more popular they were. But blacks with good grades had fewer black friends than their mediocre peers. In other words, studiousness is stigmatised among black schoolchildren. It would be hard to imagine a more crippling cultural norm.”
It’s not just a black-white issue. Students of all backgrounds have different motivators to take into account.
Here are some thought provoking resources and events on education:
- Go to the BETT TeachMeet, a pecha kucha style event from 6–9pm on Friday 16th. What is pecha kucha? 20 slides * 20 seconds = six minutes and 40 seconds on whatever, in this case exciting ways people have been using technology to teach.
- See what happened at BETT 2008. Podcasts. Summary video.
- Read about education in 2018. Stephen Downes wrote a paper called The Future of Online Learning which looked 10 years ahead from 1998. He was mostly right, and has now written a fascinating follow up available here. Most interestingly: we learn better by doing, so how can we use games to engage students with memorable simulations? As interestingly, learning may shift towards overlapping communities centered both around knowledgeable peers and trained teachers.
- Attend an unconference. Education2020: “If you want to attend an informal, congenial, stimulating event in an amazing location with brilliant and insightful people (including you, of course), then pop along to the Education2020 UNCONFERENCE wiki and get your name on the list. Not only will you be able to enjoy a great educational debate and discussion, you will also be travelling to one of the most beautiful places in Scotland.”
- Listen to some podcasts. EdTechRoundup: “conversations about using technology in education”
- More – 2020 and beyond. How about another point of view? In this paper, FutureLab looks at the impact of “personal devices, intelligent environments, computing infrastructure, security and interfaces”.
- Not enough? See 2025 and beyond. http://www.beyondcurrenthorizons.org.uk/
- Informal learning. VISION magazine issue 8, page 9.
More on BETT coming soon.
January 4, 2009
The British Library is currently running an exhibition charting, in its own words, “the 900-year struggle for rights and freedoms in the British Isles” and by association around the world.
What rights and freedoms? Liberty and the rule of law, the right to vote, freedom from want, freedom of speech and belief, having a say in how we are governed.
If you are in London, it is worth going just to see originals of documents like the Magna Carta (which first stated many of the principles of the rule of law), the Habeas Corpus Act (which enshrined the right to freedom from unlawful imprisonment), the King James Bible (the first sanctioned English language bible), Hobbes’s Leviathan (the social contract between the ruler protector and his people), and the Bill of Rights (the closest Britain has come to a written constitution).
Most of the exhibits are online and can be seen here. Click the ‘timeline’ link next to each section to see where they fit in.
For those of us who are members of Generation Y, it pays to remember that universal suffrage (in Britain) is as old as our grandparents and the Universal Declaration of Human Rights as old as our parents. We cannot take all our freedoms for granted.
“Please enter your citizen number”
However, there is another reason to write about this: the brilliant use of interactive technology. Aside from being very, very cool, the interactive booths and online visualisations make sure this exhibition stays stuck in your head.
When you enter you are invited to take one of these:
This wristband has a barcode and “citizen number” on it which you can use to register on booths scattered around the exhibition. Each booth allows you to vote on some of the issues presented in the exhibits (“Should voting be compulsory?”, “Should we all have the right to die?”, “How free should the press be?”, etc…). The system tracks your answers, and at the end you can see how they compared to everyone else’s. You can even enter your citizen number on the Taking Liberties interactive site, where you can get more info, watch videos and check out the visualisations in the comfort of your own home (pictured below).
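The comparison step is simple to sketch: with votes keyed by citizen number, your agreement with the crowd on any question is just a tally. The votes and question names below are invented, not real exhibition data:

```python
# Illustrative sketch of the booth back-end: votes keyed by citizen number,
# compared against the aggregate. All numbers here are made up.
votes = {
    142423: {"compulsory_voting": "yes", "right_to_die": "no"},
    100001: {"compulsory_voting": "no",  "right_to_die": "no"},
    100002: {"compulsory_voting": "no",  "right_to_die": "yes"},
}

def agreement(citizen, question):
    """Fraction of other voters who answered the same way as this citizen."""
    mine = votes[citizen][question]
    others = [v[question] for c, v in votes.items() if c != citizen]
    return sum(1 for a in others if a == mine) / len(others)

print(agreement(142423, "right_to_die"))  # half of the others agreed
```

The barcode is what makes this work: it turns an anonymous visitor into a stable key the booths can all write against.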
My citizen number is 142423, feel free to log in and see how I did, or try it yourself.
Museums are one of the purest expressions of designing information to educate, inform and entertain at the same time. This exhibit uses three of the most important tools to get you to pay attention and remember:
- Social proof: you want to take part and answer the questions because you can see everyone else is
- Attention: answering the questions engages you in the material and frames the sometimes archaic documents presented in the exhibits
- Repetition, repetition, repetition: the best way to make a fact memorable is to repeat it, ideally in different media. With the original documents, the interactive booths, the website and the online access to your voting, Taking Liberties has it covered.
At the heart of this is the well-executed technology; read on to see how it was done…
December 31, 2008
How do we remember? How much of our memory is linked to places, times and people?
Dominic O’Brien was the first World Memory Champion, and is in the Guinness Book of Records for memorising and recalling 54 shuffled packs of playing cards. Who better to explain one of the most common mnemonic techniques: placing the elements to be remembered along a journey, then imagining yourself physically walking it. In his own words (about a shopping list):
“To remember the list, “place” each item of shopping at individual stages along a familiar journey – it may be around your house, down to the shops, or a bus route.
For these singularly boring items to become memorable, you are going to have to exaggerate them, creating bizarre mental images at each stage of the journey. Imagine an enormous, gulping fish flapping around your bedroom, for example, covering the duvet with its slimy scales. Or picture a bath full of margarine; every time you turn on the taps, more warm margarine comes oozing out!
Later on, when you need to remember the list, you are going to “walk” around the journey, moving from stage to stage and recalling each object as you go. The journey provides order, linking items together. Your imagination makes each one memorable.”
From his book, How to Develop Perfect Memory (read it on Scribd).
Even better is this video.
The making of memory
This kind of technique is not recent. Steven Rose is a leader in the study of memory, and in his book The Making of Memory he tells the story of how the opportunity to train it (mnemotechnics) was first recognised:
“Within western culture, there is a clear history of this mnemotechnic tradition, running back to Greek times, though the written record of the method is not Greek but Roman, and first appears in De Oratore, a famous text on the art of rhetoric – that is, of argument and debate – by the Roman politician and writer Cicero. In it, Cicero attributes the discovery of the rules of memory to a poet, Simonides, who seems to have been active around 477BCE.
The Simonides story appears and reappears throughout Roman, medieval and Renaissance texts. In its basic form it tells how, at a banquet given by a Thessalian nobleman, Scopas, Simonides was commissioned to chant a lyric poem in honour of his host. When he performed it, however, he also included praise of the twin gods Castor and Pollux. Scopas told the poet he would only pay him half the sum agreed for the performance and that he should claim the rest from the gods. A little later Simonides received a message that two young men were waiting outside to see him. During his absence the roof of the banqueting hall fell in, crushing Scopas and his guests and so mangling the corpses that their relatives could not identify them for burial. The two young men were the gods Castor and Pollux, and they had thus rewarded Simonides by saving his life, and Scopas apparently got his comeuppance for meanness. But – and this is the crucial bit of the story – by remembering the sequence of the places at which they had been sitting at the table, Simonides was able to identify the bodies at the banquet for the relatives.
This experience, as Cicero tells the story, suggested to Simonides the principles of the art of memory of which he was said to be the inventor, for he noted that it was through remembering the places at which the guests had been sitting that he had been able to identify the bodies. The key to a good memory is thus the orderly arrangement of the objects to be remembered.”
From The Making of Memory.
He goes on to describe how this culminated in the Renaissance with the popularisation of “memory theatres” – literally theatres in which you would imagine yourself on stage with the elements you were trying to remember in the audience.
“By the time of the Renaissance, the memory theatre was turned from a symbolic device, a piece of mental furniture, into an actual construct. In the sixteenth century, and to the disapproval of more rationalist philosophers such as Erasmus, the Venetian Giulio Camillo actually built a wooden theatre crowded with statues which he offered to kings and potentates as a marvellous, almost magical, device for memorizing.”
The breaking of memory
So why mention all this?
If there is one thing these techniques all rely on it is giving a context to the thing being remembered.
And that is exactly what is lacking from one of the big leaps of the web: tagging. The danger of tagging as a way of remembering is that it breaks our thoughts into tiny snippets devoid of context. Normally, we make sense of the world by constantly updating an inner mental map of the people, places and things around us. We also surround ourselves with crutches, of which the humble notebook is a perfect example. It stores information chronologically, which we are good at quickly scanning. On top of that, it somehow captures a surprising amount that can later jog our memory: the pen we were using, the messiness of the writing (were we at a desk or out and about?), or simply the random doodles in the margin. The key property of these crutches is that they have a structure we can envision and navigate.
The problem comes when we throw information into something with an unstable, emergent structure like del.icio.us. I use and love del.icio.us, but am keenly aware that I sometimes prefer to dump links into a note, a draft blog post or an e-mail to myself, because I know that whilst del.icio.us aggregates my links, it won’t necessarily help me get them in order.
A challenge to designers
As we move our lives on to the web, our tools will need to help us effortlessly capture the context of each file, photo, message and thought that we upload. The aim: to make our online tools as flexible and fast as possible while still giving them the ability to help us organise our thoughts.
To do this we will have to use every trick in the book, but here are three main themes (with the way we store photos as an example):
- Using technology to capture the context (some digital cameras now have built-in GPS, even for consumers)
- Using the wisdom of the crowds to help us annotate (in the same way that Photosynth matches pictures by their contents to find where they were taken)
- Giving the user intuitive, fast tools to mould and organise what they enter (www.stixy.com is not perfect, but has some elements of this fluidity)
The winners will balance the structure we impose with the structure that emerges from the context of the elements we upload.
I can’t wait to use it!
- The Making of Memory (Steven Rose)
- Metaphors of Memory (Douwe Draaisma) – first chapter here
December 17, 2008
If a picture is worth a thousand words, what happens when your computer can give those words a rewrite, mix them with other passages and generally treat them like newspaper scraps on a ransom note?
Eventually we won’t need a file system to browse our pictures – we will probably virtually walk through the places we’ve been, with our photos like paintings on the wall of a gallery (and those of others there too). We are also getting closer to the day when images can be manipulated at a higher level than the manual, pixel-by-pixel way we are used to: “I’d like a field with four golfers in it, overlooking the sea. Great, just pan left 30% and make the grass greener. Done.” With a large enough archive of images and the kind of technologies below, that scenario is not far off.
Here are a couple of great examples which can already be used and one more on the horizon.
“You familiar with Photosynth?”
“Yes… Taking a large collection of photos, analysing the similarities and displaying them in a reconstructed three dimensional space?”
“Exactly. Build me a high school gym.”
Developer conference? Nope. This is from Microsoft Photosynth’s airing in an episode of CSI earlier this year. As Microsoft put it, Photosynth is a perfect example of a tool creating a 1 + 1 = 5 scenario, where the thousands of pictures uploaded to the likes of Flickr can be combined to create a seamless three dimensional environment. Try it.
The cool news? The technology on which Photosynth is based just hit the iPhone. It’s called Seadragon, was built by a small Seattle area startup acquired by Microsoft in 2006, and allows you to seamlessly zoom in and out of a gigapixel scale image. Try it out in your browser or check out the iPhone demo.
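Under the hood, deep zoom tools of this kind generally work from a multi-resolution tile pyramid: the full image is pre-cut into small tiles at every zoom level, and the viewer only ever fetches the handful of tiles covering the current viewport. A minimal sketch of the idea (the 256-pixel tile size and function names are illustrative, not Seadragon’s actual API):

```python
import math

TILE = 256  # a common tile size; the real format's parameters may differ

def pyramid_levels(width, height):
    """Number of levels in a tile pyramid: keep halving the image
    until it fits inside a single tile."""
    levels = 1
    while max(width, height) > TILE:
        width = math.ceil(width / 2)
        height = math.ceil(height / 2)
        levels += 1
    return levels

def tile_grid(width, height, level):
    """Tile-grid dimensions at a given level (level 0 = full resolution)."""
    scale = 2 ** level
    w = math.ceil(width / scale)
    h = math.ceil(height / scale)
    return math.ceil(w / TILE), math.ceil(h / TILE)

# A 100,000 x 50,000 pixel (5-gigapixel) panorama:
levels = pyramid_levels(100_000, 50_000)   # 10 levels
full_res = tile_grid(100_000, 50_000, 0)   # (391, 196) tiles at full zoom
```

Because only the visible tiles at the nearest zoom level are ever downloaded, a gigapixel image can feel instant even over a slow connection.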
Also check out Blaise Aguera y Arcas’s groundbreaking demo of Photosynth at TED 2007
…and on CSI!
Seam carving: resizing no longer considered harmful.
This is an awesome technology originally developed by Israeli researchers Shai Avidan and Ariel Shamir. When you resize an image normally, you squash everything in it. With seam carving you only touch the parts that matter least. In other words, the golfers keep their proportions, while the sky and grass around them are progressively removed.
This video by the researchers is the best way of getting a handle on the possibilities: the principle of seam carving also makes it possible to enlarge images without stretching them and selectively remove parts of a picture (e.g. you could pick a golfer and make them disappear more convincingly than traditional techniques). Check it out below. The paper also makes good reading.
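The algorithm itself is a neat piece of dynamic programming: compute an energy map saying how much each pixel “matters” (typically the local gradient), then repeatedly find and delete the connected vertical path of pixels with the lowest total energy. A toy sketch of the seam-finding step, using a hand-made energy map in place of a real gradient:

```python
def find_vertical_seam(energy):
    """Column index of the lowest-energy vertical seam, one per row,
    found by dynamic programming (as in Avidan & Shamir's paper)."""
    h, w = len(energy), len(energy[0])
    cost = [list(energy[0])]
    for row in range(1, h):
        prev = cost[-1]
        # Each pixel extends the cheapest seam from the 3 pixels above it
        cost.append([energy[row][col] +
                     min(prev[max(col - 1, 0):col + 2])
                     for col in range(w)])
    # Backtrack from the cheapest bottom cell
    col = cost[-1].index(min(cost[-1]))
    seam = [col]
    for row in range(h - 2, -1, -1):
        lo = max(col - 1, 0)
        window = cost[row][lo:col + 2]
        col = lo + window.index(min(window))
        seam.append(col)
    return seam[::-1]

def remove_seam(img, seam):
    """Drop one pixel per row, narrowing the image by one column."""
    return [row[:c] + row[c + 1:] for row, c in zip(img, seam)]

# Toy energy map: column 1 is "important" (the golfer), the rest is
# flat background. The seam threads through the cheap background.
img = [[1, 9, 1, 1],
       [1, 9, 1, 1],
       [1, 9, 1, 1]]
seam = find_vertical_seam(img)     # [0, 0, 0] – avoids the 9s
narrower = remove_seam(img, seam)  # the high-energy column survives
```

Run repeatedly, this shrinks the image one column at a time while leaving the “golfers” untouched; enlarging works the same way in reverse, by duplicating low-energy seams instead of deleting them.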
Talking of Adobe: Infinite Images
On the continuing theme of algorithmically creating new images from old ones, Adobe are playing around with a tool which takes any picture, finds other similar pictures and seamlessly stitches them into an infinitely pannable and zoomable virtual environment.
Essentially, this is Photosynth, but instead of sticking to images of the same thing it grabs anything that fits the bill. Grainy footage from Adobe MAX 2008, via ReadWriteWeb.
What will happen when anyone can mash up two images and create a picture of a place that looks absolutely real? What other technologies are getting us closer to this world?
December 7, 2008
Charts can be the quickest way to get a handle on an unmanageably large amount of information, but only if they are presented right. Here are three places to inspire you.
1 – The Economist daily chart
Did you know that the Economist publishes a new chart every day on… well, anything?
Check it out here: http://www.economist.com/daily/chartgallery/
2 – Hans Rosling, master manipulator of the world… of data
“Rosling began his wide-ranging career as a physician, spending many years in rural Africa tracking a rare paralytic disease (which he named konzo) and discovering its cause: hunger and badly processed cassava. He co-founded Médecins Sans Frontières (Doctors Without Borders) Sweden, wrote a textbook on global health, and as a professor at the Karolinska Institute in Stockholm initiated key international research collaborations. He’s also personally argued with many heads of state, including Fidel Castro.” From his TED biography
Hans Rosling created a tool called Gapminder (www.gapminder.org, whose Trendalyzer software was later acquired by Google) to bring the world’s data to life so it can be used to solve the world’s problems more cleverly. The presentation above, from the TED conference in 2006, was one of the first airings of his amazing tool, not to mention his brilliant presentation style.
3 – Edward Tufte – guru of data
Edward Tufte is a legendary statistician and guru of information design. Above is a chart he famously described as “probably the best statistical graphic ever drawn.”
Another notable piece of writing: how did PowerPoint kill the seven astronauts aboard the shuttle Columbia when it disintegrated on re-entry on 1 February 2003? Click here for the answer.
His website is packed with thoughts and an active community trying to find better ways to present the world around us: http://www.edwardtufte.com/tufte/index
4 – Read charts.jorgecamoes.com
I know, I know, I said three ways. Sue me.
What else inspires you?