We talk to Ian Forrester, a Senior Firestarter at BBC R&D, who has been developing innovative adaptive podcasting technology and working on its potential applications alongside artists and creative professionals from the Rabbit Holes Collective. Through their work, the collective aims to ‘constructively disrupt’ and reimagine possibilities for digital collaboration by developing ideas in immersive and sensorial tech.
We have so much to talk about: adaptive podcasting and your work with the Rabbit Holes Collective. But before we begin, I wanted to ask you to introduce yourself and tell us a bit about your journey into becoming a Senior Firestarter at BBC R&D. How did you get involved in the digital scene?
I’ve always been interested in design and technology. My journey started when I was very young. I couldn’t afford an expensive computer, so I learnt how to make one. I built my own, and this is how it all started. I went to college in Bristol and did an MD in Computer Graphics and Graphic Design, and then a BA in London in Interaction Design. Initially, I wanted to be an architect, but I realised that architecture was a seven-year course, whereas the graphic design course was only six years, so I thought that’d be better (laughter).
When I first started working for the BBC, I became an XSLT developer, but it was clear that I wanted to do more than just development; I wanted to work on new, cutting-edge projects. Innovation is where my heart is.
One of the primary objectives of your work with the Rabbit Holes Collective is to enable young people’s access to object-based media technology and, through that, empower them to follow their intuition and curiosity. I wanted to go back to the story you just told us – about how you got into working with tech – precisely because of your lack of access to it. Can you elaborate on that experience and how that way of approaching obstacles as opportunities has informed your work? Or maybe let’s start from the beginning. How did young Ian go about finding out how to build his computer?
It was a long process! I begged my parents to get me a computer, and they did, but unfortunately, they bought a computer that couldn’t do much. It got me started, though, and that was important. From the very beginning, I always maximised the use of it. I used to leave it on for days and days, ray tracing 3D scenes, because it was such a slow machine. When PCs were becoming popular, it became clear that there was no way my parents could afford one, and it would be unfair for me to ask for one. I also always felt like many people didn’t make the most use of the tools and technologies around them. I do not like the wastage of technology, especially since I’ve often been in a position where I couldn’t afford it.
To answer your second question, about how that mindset informed my work, I think that having limited possibilities resulted in my desire to maximise the use of things that most people would buy and be satisfied with using 1% of their potential. For the same reasons, I gravitated towards the Open Source community. I could never afford Microsoft Office, so I found alternatives. Yes, I could pirate it, but I’m not going to do that when there’s a legitimate alternative that is as good or maybe even better. Learning to use those alternatives is something I felt was important and valuable to the next generation; it is a way of challenging our capitalist mindset, rather than mindlessly going with the big brands.
One of the biggest obstacles when it comes to creative work with young people is encountering frustration that stems from a lack of funding, resources etc. It is fascinating because we see time and time again how many ideas find roots in managing those deficits and, subsequently, finding creative, innovative ways of working around them. I think what you’re proposing here is such a beautiful way of thinking about it – not focusing on the lack of, but on what is available, constantly asking ourselves, “what’s in my toolbox?”
And it isn’t just about your toolbox; it’s about your networks and communities. When I got involved in the Open Source movement, it opened countless doors for me. It allowed me to use software that other people had written and be an active participant in driving that software’s improvement. We would feed back bugs to each other and talk about possible new features. Everyone would help out in some way. I think that kind of collaborative, generous mindset is so important.
Can you briefly explain Open Source and how it works? What’s its role in the current digital ecosystem?
Open Source, in short, means applying open-source licences to software. This means you can share the application and its source code with other people. For example, I’m using this program I love called Inkscape as an alternative to Adobe Illustrator. Because it is Open Source, I can install Inkscape and get the code for it, write in a new feature, or suggest one to the community.
Being a part of a community comes with opportunities and responsibilities – this often gets missed out of the conversation about open source. Many people use the code or get the software and then forget about the community, but that’s where progress starts – it stems from that constant knowledge sharing and critical feedback.
There is also the free software movement, and that is a bit different. It’s free software, but it’s got a different set of licences that require you to provide your changes back to other people. With Open Source, I can distribute it freely; the licence will let you do whatever you want with it. With free software, if you make a change, you must make whatever you do freely available to the next person. That’s a slight difference, but it’s an important one because, for example, if the BBC were to use some free software and then make a change which allows it to interact with something that we do, we would then have to provide that back to everybody, including our competition. This is suitable for some software and not for others.
Let’s get to adaptive podcasting and how that came about for you. How did you get involved with this?
When I joined BBC R&D in Manchester, I was having countless discussions with the Head of R&D about audience interaction. I had lots of ideas for how people might want to interact with the content they are presented with, but he would always push back, saying that our listeners probably want to sit back and relax. “They probably had a long day; the last thing they want to do is interact and respond,” he would say.
It is common to think about explicit interaction when hearing the term interaction, which means actively reacting to something and performing an action. We’ve never paid much attention to implicit interaction: your location, the time of day, and so on. BBC R&D were conducting some experiments with object-based media technology at that time, and I was excited about the possibilities of all of it.
Can you explain a bit of how adaptive podcasting technology works?
I’ll start with object-based media, which allows the content to change according to the requirements of each audience member. The ‘objects’ refer to the different assets that are used to make a piece of content. These could be large objects, like the audio track with the narration, or small objects, like the sound of a bird we can hear in the background. By breaking down a piece of media into separate objects and attaching meaning (or, as we call it, metadata) to them, we can describe how they can be rearranged, changed, and placed to reflect an individual viewer’s context.
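To make the idea concrete, here is a loose sketch of object-based media in Python. The object names, metadata fields, and selection rule are all illustrative assumptions for this article, not the BBC’s actual format; the point is simply that each asset carries metadata, and a rule matches that metadata against the listener’s context.

```python
# Illustrative sketch: each media "object" is an asset plus metadata
# describing when it applies. A simple rule then picks the objects
# that fit the listener's context (here, just the time of day).
# All names and fields are assumptions, not the BBC's real schema.

objects = [
    {"id": "narration",    "file": "story.mp3",      "when": "always"},
    {"id": "birdsong",     "file": "dawn_birds.mp3", "when": "morning"},
    {"id": "evening_rain", "file": "rain.mp3",       "when": "evening"},
]

def assemble(context):
    """Return the files whose metadata matches the listener's context."""
    period = "morning" if context["hour"] < 12 else "evening"
    return [o["file"] for o in objects
            if o["when"] in ("always", period)]

# A listener opening the app at 8am gets the dawn birdsong layer:
print(assemble({"hour": 8}))   # ['story.mp3', 'dawn_birds.mp3']
```

In a real system the metadata would be far richer (location, device, sensor readings), but the shape stays the same: assets plus metadata in, a context-specific arrangement out.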
This is what I called perceptive media, because it perceives your actions and then adapts to them. At that time, we did a few experiments with R&D, one of them being an IoT (Internet of Things) device, the ‘Perceptive Radio’: a radio with a light sensor, a proximity sensor, and a microphone.
As time went on, it became clear that smartphones were getting better and better, so there was no point in building a dedicated IoT device. So we created an app: the adaptive podcast app.
I say an adaptive podcast, but what you create is not necessarily a podcast; it can be spoken word, or it could be a comedy play. There are limitless possibilities. This project is perfect for a community of practice, and this is why we started collaborating with creative professionals from the Rabbit Holes Collective, who are working on using the technology in their projects. We are excited to take these ideas into a workshop setting and co-design them with young people.
Can you tell us a bit about the Rabbit Holes Collective?
Sure! The Rabbit Holes Collective, composed of artists and creative professionals, explores adaptive podcasting to create content and invite people to metaphorically ‘fall down a rabbit hole’ to connect more deeply with nature.
It consists of creative professionals and artists James Cook, Dr Penny Hay, Roxana Vilk, Manu Maunganidze, Joseff Harris, Dr Tommaso Jucker, Mitch Turnbull, Kathy Hinde, Katie Dunstan, Livia Filotico, Dr Ellie Chadwick and Olly Langdon. It is a wonderfully diverse group – artists, filmmakers, designers, landscape architects, musicians, actors, educators, philosophers, scientists, engineers, technologists, environmentalists, creative producers and community champions.
The use of BBC R&D technologies supports artists from the Rabbit Holes Collective in new ways of sharing ideas via digital and emerging technologies, creating peer-to-peer connections and networks of learning and support. The open mobile platform will provide a highly responsive environment with easy-to-use tools, enabling users to engage with the media and create their own adaptive podcasts on an open web platform. Providing access from existing Android devices reduces barriers of affordability and gives artists a chance to control their work.
There are fascinating projects that the Collective is working on already. Joseff Harris, a sound artist, and Dr Tommaso Jucker from the University of Bristol will be looking at how we can represent the intelligence of underground forest communication networks using adaptive podcasting. Gill Simmons from Brave Bold Drama will be thinking about ways in which we can empower young people to express the hyperlocal history of Hartcliffe and Withywood. Kathy Hinde will be creatively engaging with Bristol’s waterways through underwater sound recordings, bringing attention to the secret river Malago and Bristol’s unique tidal range. Those are just a few examples. There are many more!
One may ask, how is this technology innovative if my phone can already use my location and change some features in my favourite applications based on it?
Your phone can change a bunch of things. The difference in what we’re doing is that anybody can get involved in it. What typically happens is that if you want to do something similar, you have to hire a developer, and they would have to write a custom application just for you. That’s great if you can afford to hire a developer, but what we’ve done is make a generic application, so that anybody with a basic skill set can create their own custom adaptive podcast, upload it to their site, and then download it to their podcast player.
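One way to picture why a generic player removes the need for a developer: the creator writes a declarative description of their podcast, and a single shared player app interprets it on the device. The JSON shape, field names, and matching logic below are hypothetical stand-ins for illustration; the BBC’s actual format may look quite different.

```python
# Hypothetical sketch of a generic adaptive-podcast player: the creator
# writes data (here plain JSON, not the BBC's real format), and one
# shared interpreter decides what to play. No custom app per podcast.
import json

podcast_json = """
{
  "title": "A walk by the river",
  "parts": [
    {"file": "intro.mp3"},
    {"file": "city.mp3",    "if": {"setting": "urban"}},
    {"file": "country.mp3", "if": {"setting": "rural"}},
    {"file": "outro.mp3"}
  ]
}
"""

def play_order(description, device_context):
    """Return the files to play: unconditional parts, plus any part
    whose 'if' conditions all match the device's local context."""
    order = []
    for part in description["parts"]:
        cond = part.get("if", {})
        if all(device_context.get(k) == v for k, v in cond.items()):
            order.append(part["file"])
    return order

description = json.loads(podcast_json)
# Evaluated entirely on the device; no data is sent to a server.
print(play_order(description, {"setting": "rural"}))
# ['intro.mp3', 'country.mp3', 'outro.mp3']
```

Because the conditions live in data rather than code, anyone who can edit a text file can author a new adaptive piece, and because the context is evaluated locally, personal data never has to leave the phone.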
We’ve done that successfully, but because we’re using open source code to do it, the program will always be in beta. It’s meant to be; it will never get finished because it will keep on growing.
Does anybody else on the scene pursue similar projects?
I’m stunned that the likes of Google, Apple and Spotify haven’t done this. But I know why – it doesn’t fit with their business model. Their business model is to sell adverts, and even though adaptive podcasting could do that very well, we’re using it for creative purposes. We built it in a way that it’s decentralised. There are no callbacks to a central server. When you download the application, you download your podcast from the BBC, or wherever it is, and you play it. None of the data that’s being used goes anywhere. Because we’re dealing with very personal data, the way we built the codebase means you couldn’t do that. There’s nothing to stop someone else from adding that in the future, though.
What are the plans for distributing it or informing people about this app?
The plan is to open source the codebase and release the application on the Google Play Store. Anyone with an Android phone will be able to install it. It will have several demos and, hopefully, some excellent stuff in there that will showcase the possibilities of the technology. Thanks to a funding bid we won with the European Broadcasting Union, we will also create a simple editor that will enable anyone to create their podcasts without needing to know how to code.
We have already had many companies contact us interested in integrating this into their technology and other use cases, which has surprised and delighted us. Our biggest ambition is to put this technology in young people’s hands and see media made by young people, for young people. It is also something European public service broadcasters are interested in.
How exciting. Thank you Ian, it was great to speak to you today. I can’t wait to see where the project is going.
Thank you. It was a pleasure.