Categories
Technology

First impressions of Tiger…

A few first impressions of Apple’s upcoming operating system, Tiger:

  • Spotlight – basically an Apple-native version of Launchbar with a few self-evidently nice features (searching Mail, for instance). After using Launchbar for a while I can state without question that it comes to feel like part of the OS, so if the Apple version can similarly be run without touching the mouse, then I’ll be pretty happy. I doubt Objective Development will be, of course…
  • iChat AV – the new version allows you to conference-call ten people and video-chat with loads of people too. That’s pretty cool – I’m bloody impressed, frankly. It’s exactly what I’ve been hoping they’d do, and it fits in really well with some other thinking I’ve been doing…
  • Safari RSS – sites with RSS feeds are marked when you visit them, which is cool, and you can then subscribe to them directly through the interface. I’m not convinced by their interaction design here – I suspect I’ll keep using NetNewsWire for the time being – but it’s certainly a positive step that can’t help but increase RSS penetration…
  • Dashboard – basically this one is a fucking rip-off of Konfabulator and I’m pretty pissed off about it. I’m not a particular fan of the paradigm, but I don’t think that’s pertinent – unlike RSS feeds and the Launchbar knock-off, there doesn’t really seem to be any reason for building this into the OS. It’s not particularly interesting or powerful, and it really does seem like nicking someone else’s ideas and lobbing them into your Operating System because you can. Harsh, Steve. Harsh…
  • Automator – as far as I can tell this is a GUI for AppleScript, allowing people like me who get scared by even the simplest of scripting languages to automate tasks quickly and easily without ending up dribbling into a cup. My personal jury is out on how useful it’ll actually be on a day-to-day basis but that doesn’t mean that I’m not impressed. AppleScript for the rest of us?
  • VoiceOver – a spoken interface for the Mac. I’m not really qualified to comment on the technology, but certainly the aspiration is good and important and I don’t doubt it’ll serve Apple well in cracking governmental markets. My only quibble – I’m not so impressed by the little white man they made in Illustrator to put in the logo. He looks a bit lame and … bendy …
  • .Mac sync – a revised version of iSync with a simpler UI and apparently some developer hooks. It still pisses me off that you have to have a .Mac account to do any syncing across the internet. I can’t quite believe that function is worth the $60 a year, nor can I believe you couldn’t explain to someone how to set up a server to handle that stuff themselves… Seems like a slightly shitty attempt to drill money out of you in an entirely random way…
  • Better Unix – neat! I think!
  • Xcode 2.0 – wish I understood it!
  • System Wide Metadata Indexing – this is seriously cool and API’d up the wazoo so hopefully it will start to lay the foundations of the Finder technologies of the future…

All in all the big news for the operating system is the integration of search and metadata technologies into the heart of the Operating System. The Safari RSS and iChat AV stuff is pretty cool too and everything else looks like tweaks, gimmicks or outright rip-offs. I wonder when it’s out?

Categories
Design Net Culture Radio & Music

Developing a URL structure for broadcast radio sites…

One of the most common questions I’ve had about the Radio 3 redesign work that we’ve been doing has been about the URL structures that we have used to identify individual episodes of individual programmes. I’m really keen to address these questions with a full and maniacally over-detailed post because I think the issue of how we map broadcast programming to web URLs is a really interesting one, and because I think we’ve done some good work here that other people might find useful or interesting. Drew McLellan writes:

I see URLs like /radio3/showname/pip/randomcode which, as I understand it, would require a user to locate a particular show through the site’s navigational system. It looks like there’s no way of guessing a URL. Is that right? What’s ‘pip’? That makes no sense to me. My preference for date-based material is a path with the date in it – like /radio3/showname/2004/06/27/. Is there a reason why a URL format similar to this wasn’t chosen?

So the first thing to explain is that Radio 3’s new site is particularly interesting and ground-breaking because it doesn’t just have a page for every broadcast, it has a page for every episode. This is way cooler than having a page for every broadcast, but the full implications of it aren’t immediately easy to digest. Basically it means that there would only be one page for any documentary no matter how many times that documentary is repeated. That one specific page then becomes the definitive home for that episode of that documentary on the BBC and all subsequent information or supplementary material that is relevant to that episode can be stuck onto that page at any point in time. Imagine it as being a bit like having an entry in IMDB for that particular radio episode. It’s like creating the basis for an ever growing encyclopaedia of Radio 3 programming, and it should make it really easy to search for information about a programme without getting overwhelmed by dozens of versions of the same page, each containing little odds and sods of information, none of which are aware that they’re all talking about the same thing.
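To make that distinction concrete, here’s a rough sketch in Python of what separating an episode from its broadcasts might look like. Everything in it – field names, the identifier, the class itself – is invented for illustration; it isn’t the BBC’s actual data model:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Episode:
    """One page per episode: a single, stable home for a piece of
    programming, however many times it goes out on air."""
    episode_id: str                      # stable identifier -- never changes
    brand: str                           # e.g. "Composer of the Week"
    title: str
    broadcasts: list = field(default_factory=list)

    def add_broadcast(self, when: datetime) -> None:
        # A repeat is just another broadcast attached to the SAME
        # episode record -- no new page, no duplicate information.
        self.broadcasts.append(when)

ep = Episode("x4k2p", "Composer of the Week", "Mozart, Part 4")
ep.add_broadcast(datetime(1985, 6, 27, 12, 0))   # original broadcast
ep.add_broadcast(datetime(2004, 6, 27, 0, 0))    # a repeat, nineteen years on
```

The point being that a repeat only ever adds another broadcast to an existing episode record – the episode itself, and therefore its page and its URL, never has to change.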

Having said all that, lots of programmes don’t ever get repeated on Radio 3. Let us take, as an example, “Morning on 3”. This is basically the equivalent of the DJ-led shows that we’re all familiar with and which are common to radio networks the world over. These things are broadcast live – that’s the whole point! It wouldn’t make any sense for them to be repeated. Some of the music will clearly be repeated – just as on any popular music radio show – but the programme itself will not. For programmes like “Morning on 3”, Drew’s URL structure (which is familiar to all of us who run weblogs) would work perfectly. You can imagine very easily getting to today’s episode of Morning on 3 via the URL bbc.co.uk/radio3/morningon3/2004/06/27/. It would be the perfect weblog-like kind of programme, where every individual entry/episode could only be connected to one moment in time.

But it wouldn’t work if the programme ever got repeated. By definition, a programme that gets repeated has been broadcast on multiple occasions in time. Imagine a programme that was originally broadcast on June 27th 1985 and which is then repeated the following evening and then again nineteen years later (tonight). What would be the date-based URL for a programme like that? Well, one approach would be to go for the date on which it was first broadcast. But what’s the experience of that for a user? They’ve gone to a schedule page for today (say), clicked on the link to a programme that’s on this evening, and found themselves at a URL from 1985. A plausible reaction would be to think that you’d got lost somewhere along the line and were on the wrong page. How did I end up here? This situation gets worse when you consider that, since we only started capturing programmes on the 4th of June, any programme originally broadcast before that date would be assigned a URL based on a fairly meaningless broadcast date…

So, a date-based URL structure would work fine for programmes that never get repeated, but wouldn’t work very well for any programme that did get repeated. Immediately, we’ve got a problem then, because even though 99.9% of the time we know that “Morning on 3” won’t get repeated, we can’t exactly guarantee it. Just recently on the BBC we’ve had an unedited re-broadcasting of the live coverage of the 1979 General Election and the daily re-broadcasting in real-time of the Home Service’s commentary on the D-Day landings. So even those topical programmes we’ve talked about could quite easily be repeated.
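The underlying problem is that a date-based scheme quietly assumes a one-to-one mapping between episodes and dates, and a repeat makes that mapping one-to-many. A toy sketch in Python (programme name and dates entirely invented) of what goes wrong:

```python
# One episode, three broadcast dates: whichever single date you pick
# to build the URL from will look wrong in some context.
broadcasts = {
    "some-documentary": ["1985-06-27", "1985-06-28", "2004-06-27"],
}

def date_url(episode: str) -> str:
    first = broadcasts[episode][0]       # use the original broadcast date...
    yyyy, mm, dd = first.split("-")
    # ...so a listener clicking through from TODAY's schedule page
    # lands on a URL apparently dated 1985 and assumes they're lost.
    return f"/radio3/showname/{yyyy}/{mm}/{dd}/"

print(date_url("some-documentary"))      # /radio3/showname/1985/06/27/
```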

But let’s pretend for a moment that isn’t too much of a problem. Let’s also pretend that we can easily distinguish between those programmes that almost certainly won’t get repeated on the one hand (and say they might work with a date-based URL structure) and those that very easily could or will get repeated on the other (say anything that’s pre-recorded before it goes out on air). What kind of URL structure should we use for the latter?

One obvious and simple answer is that we should use episode numbers. The Radio 3 show Composer of the Week is broadcast each weekday around lunchtime and is then repeated the following week at midnight. This means that there are two episodes broadcast on each day (another place where date-based URLs might get confusing or seem broken). If we used episode numbers, however, that wouldn’t be so much of a problem. So you can imagine the URL being something more like bbc.co.uk/radio3/cotw/episode/2345. This would allow you to predict sequence and order and would make the URL structure nice and hackable by users. Except then you have to think about what you should base that episode number on. Should you base it on the definitive numbers for that episode – ie. the ones that the makers of Composer of the Week use? How would you source that number? Do you trust that numbering scheme to be consistent and reliable? Or should you start with an arbitrary number? And what happens if your system for determining repeats isn’t fool-proof and you accidentally assign the wrong number to an episode at some point? The worst eventuality would be that you end up with episode numbering schemes that start to wander out of sync with one another because someone pulls an episode or a schedule changes. And then you get gaps in your URL structure, or programmes out of order. Imagine a circumstance where, after six months of perfect running, you accidentally pick something up as being a repeat when it isn’t… Suddenly that episode has to be reinserted into the scheme somewhere by hand, or you have to change the URLs of any episodes that were turned into pages before you realised. The URLs break, or what they point to changes, and that whole part of the site stops being human-hackable or readable and starts becoming institutionally and forever broken.
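Here’s a small sketch of that failure mode, with entirely invented episode names and numbers: once sequential numbers have been published as URLs, correcting a single mis-detected repeat shifts every number after the insertion point.

```python
# Four episodes published in broadcast order, numbered 1..4.
episodes = ["ep-A", "ep-B", "ep-C", "ep-D"]
published = {n + 1: ep for n, ep in enumerate(episodes)}
# URLs like /radio3/cotw/episode/3 are now out in the wild.

# Months later, an episode wrongly treated as a repeat of ep-B has
# to be reinserted into the sequence by hand...
episodes.insert(2, "ep-B2")
renumbered = {n + 1: ep for n, ep in enumerate(episodes)}

# ...and every URL after the insertion point now points at the wrong thing:
for n in sorted(published):
    if published[n] != renumbered[n]:
        print(f"/episode/{n} was {published[n]}, is now {renumbered[n]}")
# /episode/3 was ep-C, is now ep-B2
# /episode/4 was ep-D, is now ep-C
```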

Or you could do it by subject for some of the URLs. Again – Composer of the Week is broken into five-part weekly chunks. You could have a URL structure for programmes like this which highlighted those divisions: bbc.co.uk/radio3/mozart/part/4 or bbc.co.uk/radio3/mozart/4. Here the problems are potential URL length and namespace collisions. And while such URLs might remain human-readable, they’re not machine-predictable in any way. So even this kind of URL structure has its problems.

I want to make something clear at this point – each one of these URL schemes could have worked very nicely for its particular kind of programming. But in the end that’s not enough, because as soon as you’ve decided to use different URL structures for different kinds of programming you’re immediately in trouble – radio programming isn’t a static thing; it changes and evolves. An individual programme brand (say Choral Evensong) might change format, change frequency or be cancelled. Another programme might be created with the same name ten years later. And each week there will be a number of specials and one-offs and schedule fillers (this week on Radio 3 there were around seven one-offs, including tonight’s zeroPoints) as well as regular short series or new brands. Suddenly there’s a time-consuming and fairly skilled job that has to be undertaken every day – deciding which URL structure each new programme should use… And you’re never going to be one hundred percent correct. And so pages are going to be moved and URLs break and all hell will break loose…

Which brings us to the URL structure that we went with in the end, and the rationale for it. Our first principle was that in order to stop URLs breaking, to remove the possibility of human error in assigning URL structures to brands incorrectly, and to deal with the possibility of random repeats et al, the URLs should all follow exactly the same structure. Fundamentally, this meant that date-based URLs went out of the window straight away, because they weren’t suitable for every episode of every brand. The only URL structure we could identify that didn’t break in any circumstances is one based on an episode number or identifier of some kind. After careful consideration we decided that we didn’t want to give the impression of human readability or order or structure where that structure was inevitably likely to be broken, flawed or mismatched with other identifiers. And we decided that whatever addition we made to the URL had to be short – it had to be able to be appended onto the end of a brand name without sprawling out of control. More importantly still, we decided that it shouldn’t break any naming conventions already used around the site or make the site harder to maintain.

Which is where ‘pip’ comes in. We’d already decided that we didn’t want to have the episodes sitting in the top directory of the brand. We’re in this for the long term, and we wanted to make sure that, whatever future changes were made to the content management of the site and however many new things or features were added to it, we’d never have collisions between those features and the episode pages. We decided to place all episode pages into a subdirectory, and after much discussion of what that should be called (episodes – too long, and not always an obvious term for a news programme / eps – too likely to already be in use, and too close to the name of a file format for us to be sure it wouldn’t overwrite anything at any time in the future, etc.) we eventually decided to stake our claim on the directory name /pip/, meaning (if you really want to know) nothing more than ‘programme information page’. [PS. In a few weeks’ time, this directory should contain a list of all the episodes for each brand, meaning that you can hack back the directories and keep going up a level in the site hierarchy from individual episode to all episodes to brand to network to broadcaster.]
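As a quick illustration of that hacking-back (the episode identifier here is made up), every step up the path should land you on a meaningful page:

```python
url = "bbc.co.uk/radio3/cotw/pip/x4k2p"  # episode identifier invented

parts = url.split("/")
while len(parts) > 1:
    parts.pop()
    print("/".join(parts))
# bbc.co.uk/radio3/cotw/pip   -> list of every episode of the brand
# bbc.co.uk/radio3/cotw       -> the programme brand's home page
# bbc.co.uk/radio3            -> the network
# bbc.co.uk                   -> the broadcaster
```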

With the final part of the URL – the episode number itself – we had to take into account all the problems we might have with sourcing and guaranteeing the integrity of the ‘definitive’ numbers for any given series of programmes, along with any and all possible bugs that might emerge (what if two random programmes started to be considered as repeats of each other and had to be broken apart – what URLs would we give them? What if programmes were broadcast out of sequence, or we started running the site halfway through the broadcasting of a run and had to move the episode numbers around later?). We came to the conclusion that the actual episode number should be a non-human-readable short code. After much deliberation we decided that a five-character alphanumeric hash would be short enough not to break URLs in e-mail and long enough to give us up to 60 million different identifiers. And of course we’ve kept it as a directory-level URL to future-proof the URLs against changes in the technology that we’ve used to build the site. (You’ll notice some index.shtml’s around the place, but we’re going to clear that up.)
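For the curious, the arithmetic holds up: 26 letters plus 10 digits in each of five positions gives 36^5 = 60,466,176 combinations – the ‘up to 60 million’ above. As for how such codes get minted, the post doesn’t say, so the following is just one plausible approach sketched in Python:

```python
import secrets
import string

ALPHABET = string.ascii_lowercase + string.digits   # 36 characters

def new_episode_code(existing: set, length: int = 5) -> str:
    """Mint a random five-character short code, retrying on collision.
    36 ** 5 = 60,466,176 -- roughly the 'up to 60 million different
    identifiers' quoted above."""
    while True:
        code = "".join(secrets.choice(ALPHABET) for _ in range(length))
        if code not in existing:
            existing.add(code)
            return code

used: set = set()
print(new_episode_code(used))   # e.g. 'k3v9x' -> /radio3/cotw/pip/k3v9x
```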

The alphanumeric short code that we’ve got now also opens up a whole range of new possibilities. Because these identifiers are unique across all of Radio 3, we suddenly have a way to point to (and potentially manipulate) every episode that’s broadcast on the network. We’re still looking into the various affordances that this identifier might provide us with and we’ll let you know what we come up with.

So – in summary – we have a URL structure that is eminently suitable for dealing with the breadth and wealth of programming that could come out of a radio network – a URL that will shortly be totally hackable, to the extent that each and every level of the directory structure will contain content appropriate to its place in the site’s structural hierarchy (broadcaster / network / programme brand / episode list / individual episode), and which is human-readable as far down its length as is practical. Drew’s quite right – in order to find the URL for an entry you do need to use the site’s inbuilt navigational systems. But it’s almost impossible to build URLs for radio programming that are both completely human-guessable and as reliable and stable as we’re determined to make them.

We’re thinking five to twenty-five years in advance here, making sure that the URLs of pages about radio programmes on Radio 3 could conceivably last as long as the web does. We’re in this for the long-haul…

Categories
Design Radio & Music

The new Radio 3 site launches!

Ladies and Gentlemen, it gives me great pleasure to direct your attention towards the new Radio 3 website, which I (along with a great number of other people from every discipline and from all across the BBC) have been working on for the last few months. The teams that created the site have been among the best I’ve ever worked with, and if I started naming names I’d be here all week.

But what’s so special about it, I hear you ask? Quite apart from the sterling design work from Paul Finn, we’ve been working with Radio 3’s team to make the site one of the most genuinely web-native sites I’ve ever seen – designed to effectively reflect the station’s programming online in a way that’ll be better for the site’s current users, for search engines and for anyone who would want to link to the site – including (but certainly not limited to) webloggers. Specifically the new site includes:

  • A web page with a stable long-term URL for each and every episode of each and every programme that is broadcast on Radio 3 – a page that will always have basic information upon it, but can also be supplemented with more content by the production teams that actually make the programmes. (Lebrecht.live, World Routes’ “Cairo Nights” etc.)
  • New schedule pages that are persistent and will remain on the site in perpetuity, with each item on them linking through to the specific episode page for that programme – allowing you to navigate to any episode of any show by the date and time it was broadcast (particularly useful for finding out what was playing yesterday when you were listening to the radio in the car).
  • Better navigational aids, including the ability to easily see when the next (and last) episodes of your favourite programmes are on, the ability to navigate between episodes of a programme by date, and a full daily schedule on the front page of the site, linking through to every episode.
  • Improved URL structures, easily spiderable pages and nice content-related title tags that should make each page easier to bookmark and find through the BBC’s search engines and search engines across the web.

I could go on – I’m terribly proud of the work that everyone has done on the site and it’s only going to get better over the next few weeks. But good work be damned! The most important thing is that I think it’s going to serve the site’s users better – both existing, and (perhaps) people who’ve never listened to Radio 3 before and can now be exposed to its wealth of programming over the web more effectively than ever before.

PS. Hello to Leigh, Justin, Andrew, Gregory from Radio 3, Paul and Sarah from the Technology and Design team and everyone else who worked on the project: Zillah, Rija, Tim, Mike, Matt B, Paul C, Manjit, Ian, Jason, Tony, Clare, Dan, Webb, Chris K, Simon N and anyone else I might have forgotten about. And a special personal wave to Margaret Hanley and Gavin Bell for being the best creative partners and co-conspirators a boy could wish for. You all rock!

Categories
Random

Two questions about stress and work…

Weird day. Really good bits. Really tiring bits. Feel slightly washed out and exhausted. Can’t tell if I’m a big drama queen or not. Met the new Director General of the BBC today. Launched a site and stuff. More on the important bits tomorrow. Question of the moment: Should you be stressed in stressful situations or should you be calm through them? Personal theory – stress gives you power and if corralled and put into your service is fairly useful. Not sure I’m correct. If I’m wrong, I’m going to have to restructure my belief system a lot. Other question: What do people do when they’re not working?

Categories
Random

So very near…

Guess what! Light at the end of the tunnel! Give me a few more days… Just a few more days… Then I can do stuff again! Like write things on my weblog, look after my community, take days off, have a haircut, clean up my flat, organise a holiday! Just had my first weekend in about two months where I don’t feel under pressure or stressed, and it’s practically over and I don’t feel anywhere near calm or relaxed, but I feel a hell of a lot better than I did this time last week and I’m thinking that maybe it’s the beginning of a less frantic period. I’m looking forward to being part of the world again.

Categories
Random

Anyone for Gmail?

So anyway, I signed up for Gmail a while back and basically I don’t really use it, even though it’s pretty well assembled and has some nice features. Now I’ve been given three invitations, but everyone I know who wants an account already has one. I imagine the big craze for offering people enormous gifts and proposing amorous liaisons in exchange for One Gigabyte of Full-On E-mail Pleasure has passed, but if you can think of a reason why I should invite you rather than just making three new accounts with funny names, then post a comment below or chuck me an e-mail at my fairly guessable normal e-mail address (hint: it starts tom@) and I’ll see what I can do…

[Update: I’m afraid all three invitations have now gone. As soon as I get any others I will post them up and people can make a case for why they should get one. Sorry if I didn’t send one to you. It’s not personal…]

Second update: I got another five invitations this morning and have given away four of them to some of the other people who contacted me about them, either on this post or by e-mail. That means I have one more left if anyone out there wants it. Yet again – make a good case and I’ll give it to you. I’m adding one condition now – if you get an account and eventually get the ability to give out invitations, you have to give away at least one of those invitations to someone who says something funny on the internet. You can’t give ’em all to friends. I have no way of enforcing this, of course, but I will consider you a person without honour if I find out you have wilfully ignored your obligations.

Categories
Random

On representing the backlog caused by an absence of cerebral RAM…

That period before a launch is always stressful. This time is no exception. It’s occupying my entire head almost 24/7 no matter whether I try and leave work on time or whether I’m there for twelve or fourteen hour days. It doesn’t make any difference. It’s just there in my head and it probably will be until a couple of weeks after it’s finally launched. C’est la vie. It’s the nature of the beast.

In real life, of course, people can sense when you’re busy and don’t feel particularly upset if you aren’t able to give them the time that you would like to. They might not be thrilled about it, of course, but they understand. But the signals that I can give off in public through my weblog are less clear. Has he just abandoned the thing? No. Why doesn’t he have anything interesting to say anymore? Well, I do! Probably more than ever at the moment. I just can’t find the headspace to write it all down. Why isn’t he commenting on that thing that’s so obviously one of his core interests? Well, it’s because I’m not commenting on anything – the only creative thing I’m able to do outside work at the moment is doodle in Illustrator.

What I need is some way of ambiently reflecting my personal weather – without all the clunkiness of actively choosing states of mind. What I actually need is some way of representing that I’m just really, really behind… A first suggestion – some way of representing the number of unread posts I have in NetNewsWire at any given moment (currently way over six hundred). Except that my path to posting tends to be more circuitous than that. NetNewsWire posts get opened in browser tabs if they look interesting, read thoroughly and then (if they’re not something I want to follow up on) immediately closed. The number of open tabs reflects pretty much exactly the number of things I actively want to talk about at any given moment. If there are lots open, it probably means that I have a lot I want to write about and no time to do it in. Except that doesn’t work either, because in addition to the six hundred things in NetNewsWire I haven’t filtered and the fifty tabs I have open at the moment, I also have four folders in my bookmarks called “State of Play 1-4” that were the sum total of all the things I wanted to talk about and had open in Safari, but then had to store quickly so that I could install a Mac OS X update. That’s another two hundred discussions I really want to get involved in – that I want to contribute to. And then there are the four or five little projects I have on the side that I’ve been trying to write up but have been incapable of doing so.

So six hundred unfiltered posts, fifty open tabs representing fifty filtered posts to talk about, two hundred bookmarks representing two hundred even more filtered conversations to get into, plus four or five multi-page documents (one around 6,000 words) that have been growing in the sidelines that I’m unable to push out into the world in any effective way. That is the index of how busy and behind I feel. That is the measure of my total absence of cerebral RAM. Do you now understand why I’m not posting that much?

Categories
Technology

Notes from NotCon: Hardware

Well, anyway, since I’m up I may as well finish off my coverage of Sunday’s NotCon. After the Geolocation panel (my notes), I joined the Hardware panel. Over the entire day I self-consciously avoided all the political panels because they just looked like they’d be incredibly frustrating and confrontational. Ironically, I decided that I wouldn’t find the Blogging panel quite as annoying, but more on that later in the day… The Hardware panel comprised talks by James Larrson, Steven Goodwin, Matt Westcott, George Wright and Anil Madhavapeddy, and was a really mixed bag of the sublime and the ridiculous.

There’s something uniquely nostalgic about British geeks – their fetishes for the computers of their youth (the BBC Micro and the Sinclair Spectrum in particular) seem to overwhelm their future-thinking impulses time and time again. I can’t say that I’m convinced that this is a good thing – it makes me wonder about how British geekhood views its own chances of creating new technologies that actually can push things forward. Maybe they feel it’s just not possible any more? Maybe they think no one will take them seriously…?

That’s not to say that Matt Westcott’s illustration of new hardware and software trends on the Spectrum wasn’t impressive or entertaining. He demonstrated connecting the tiny computer to hard disks and compact flash, talked about the demo scene and the “only project on sourceforge for the Sinclair Spectrum”, and ended up with a streaming video version of the Chemical Brothers’ Let Forever Be video (directed by Michel Gondry). All good fun – I just can’t help but feel that it’s a little bit of a waste of a talented man’s time.

James Larrson’s piece was similarly random – but here at least the whole thing was clearly a bit tongue-in-cheek, and his presentational skills were so good that someone should really give him a TV series of short introductions to crackpot inventors. He’d be awesome. The project he was talking about was based around using a BBC Model B from 1982 to measure the changes in state of the mayonnaise, bread and prawn components of a Marks and Spencer prawn sandwich – and using that to tell the time. I’m not going to go into too much detail except to say that he’s managed to get the accuracy so good that the clock now only loses or gains up to four hours in any given day.

I didn’t get the name of the next guy – I think it was the Reverend Rat – but he was showing how you could radically extend the range of Bluetooth devices. Apparently, by soldering one together with an antenna, he’s extended the range from ten metres to the rather more satisfyingly non-personal 35 miles (and more). His main planned use for this particular piece of tech seemed to be to stand on top of Centrepoint jacking into the phones of passers-by. Or that could have been a joke. Funny chap. Cool though…

Then we got to the three talks that were actually about the way technology might evolve. Steven Goodwin’s piece was on hacking around with your house and TV to allow you to control things long-distance (including recording TV on demand and streaming it back to your computer via – I think – e-mail), which wasn’t really particularly new in principle, but it was nice to hear from someone who’s actually doing it throughout their home. [If you’re interested in this stuff, then the O’Reilly book Home Hacking Projects for Geeks could be a good read.]

Then George Wright talked about Interactive TV, why it wasn’t the web and why that’s a good thing (in his words). The language he used about the platform’s restrictions (no return path in many cases, exhaustive centralised testing required before any product can be rolled out, no literature to support development, completely limited to broadcast companies, etc.) doesn’t fill me with hope for the future of iTV – particularly when compared to the possibilities of the future ever-present, fat-piped, non-broadcast-limited, massively flexible and responsive web – but he did make a good case for convergence not being the point. We’re still talking around this stuff behind the scenes and I’ll let you know if we come up with anything interesting.

And finally – my particular favourite of the session – Anil Madhavapeddy talked about using camera phones as ubiquitous remote controls / mice. There were some lovely aspects to this – the ‘ooh / aah’ bit coming when he demoed applications with ‘robust visual tags’ (which look a bit like 2D barcodes) that the camera phone could recognise and manipulate. So you’d come up to some kind of public terminal, turn on the camera phone, arrange it so that you could see the control you wished to manipulate on the phone’s screen, and then press the equivalent of a mouse button – at which point the control on screen could be moved around just as if your camera phone were a mouse (via Bluetooth or Wifi, I assume). It sounds over-complex from this introduction, but some of the immediate benefits were clear – the same tags could be used as static encodings of commands in paper interfaces that you just print out, there’s a built-in mechanism for manipulating money via a mobile phone that opens up lots of possibilities for exchanging or buying things, etc. etc. I’m going to be keeping an eye on this stuff – it was fascinating…

And that’s pretty much all I have to say about the Hardware panel at the moment. I have to head off to a thing at the RAB on the “21st Century Radio Listener” for work. I’ll talk about the next session on MP3s and Mash-Ups later in the day…

Categories
Random

A snapshot of my neighbourhood…

6.30am: The Police call around and get me to open the front door. They knock on my neighbour’s door and ring the doorbell a couple of dozen times. Looking out the back window, the entire block is surrounded by plain clothes policemen. My neighbour doesn’t answer. This probably means that their son has beaten his girlfriend to a bloody pulp again.

Categories
Technology

Notes from NotCon: Geocoding

So the first panel of the day is over and we’re now waiting in the over-crowded downstairs for the Matt Jones-hosted “Hardware” panel to get going.

My initial reactions to the Geocoding panel were extraordinarily positive – the first project that people talked about was called Biomapping and it was a fascinating concept. Basically the guy talked about using a galvanic skin response detector attached to a GPS device to start plotting individuals’ reactions to the environment around them. Totally fascinating. Then followed Nick West from Urban Tapestries (note-based geo-annotation on mobile phones), Earle Martin from Open Guides (wiki-based open city guides) and a clump of people from Project Z, who seem to spend their lives creeping around in places where they shouldn’t, taking photographs and leaving Indymedia logos. Pretty cool stuff.

I tried to ask a question during the event, but unfortunately was shut down by Chris Lightfoot. For those who are interested, I wanted to know whether any of the geo-annotation systems (including but not limited to the Open Guide wikis) were building in protection against spamming at the architectural level. So many useful and potentially valuable projects in the past have ended up with fundamental problems with spamming (e-mail and weblogs – and now wikis – among them); the last thing we need is a standard for annotating the earth and all things around us that gets overwhelmed with adverts for prostitutes, scams, drugs and vouchers for Starbucks and McDonalds.

I’ll try and post up the full SubEthaEdit notes from the first session later in the day. No promises though…