<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>AI &#8211; The Writing Platform</title>
	<atom:link href="https://thewritingplatform.com/tag/ai/feed/" rel="self" type="application/rss+xml" />
	<link>https://thewritingplatform.com</link>
	<description>Digital Knowledge for Writers</description>
	<lastBuildDate>Mon, 15 Aug 2022 08:01:58 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	
	<item>
		<title>Designing a VR Experience in a Covid-19 World</title>
		<link>https://thewritingplatform.com/2022/08/designing-a-vr-experience-in-a-covid-19-world/</link>
		
		<dc:creator><![CDATA[Amy Spencer]]></dc:creator>
		<pubDate>Mon, 15 Aug 2022 08:00:38 +0000</pubDate>
				<category><![CDATA[Featured]]></category>
		<category><![CDATA[Projects]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[VR]]></category>
		<guid isPermaLink="false">https://thewritingplatform.com/?p=4476</guid>

					<description><![CDATA[<span class="rt-reading-time" style="display: block;"><span class="rt-label rt-prefix">Reading Time: </span> <span class="rt-time">11</span> <span class="rt-label rt-postfix">minutes</span></span> In September 2019, five months before the pandemic, I moved to Toronto to begin a PhD at York University. I had been to the city before, and as I was a new international student, I thought I would make new social connections in school. However, in February 2020 the world transformed into its virtual ‘metaverse’...  <a class="read-more" href="https://thewritingplatform.com/2022/08/designing-a-vr-experience-in-a-covid-19-world/" title="Read Designing a VR Experience in a Covid-19 World">Read more &#187;</a>]]></description>
										<content:encoded><![CDATA[<span class="rt-reading-time" style="display: block;"><span class="rt-label rt-prefix">Reading Time: </span> <span class="rt-time">11</span> <span class="rt-label rt-postfix">minutes</span></span><p>In September 2019, five months before the pandemic, I moved to Toronto to begin a PhD at York University. I had been to the city before, and as I was a new international student, I thought I would make new social connections in school. However, in February 2020 the world transformed into its virtual ‘metaverse’ form, and I realised I would not be able to make friends and deepen new relationships through Zoom, Teams or Slack. The loneliness hit me hard, and even though leaving Canada was an option, I was worried that I would not be able to return to finish my studies, so I stayed.</p>
<p>Looking back at those two years as a linear order of events, it seemed to me as though the time had been a vacuum. I could not put events into a linear narrative, and I could not remember much of what happened in those two years. Both as a community and as individuals, we went through a repeated experience of fear, uncertainty and restricted freedom. Negative emotions, such as boredom and anxiety, influence our long-term memory. Moreover, numerous studies have reported that ‘human cognitive processes are affected by emotions, including attention, learning and memory, and reasoning’ (Chai M. Tyng et al., 2017, p. 1454).</p>
<p>So, I wondered whether one of the post-pandemic consequences could be trauma (and possibly a collective trauma). Research on trauma shows that “the social environment does not have a direct and static impact but is mediated by emotional experience, the way it is lived through, interpreted, and processed on the basis of social, personal, and situational resources (today often termed as potential for resilience)” (Busch &amp; McNamara, 2020, p. 324). Each person experiences the hardships of life differently, and some are more resilient than others. I was stuck, anxious and unable to focus, so I decided to do the one thing that satisfies me: making things. One of these ‘things’ was an interactive experience dealing with my own pandemic trauma. I decided to experiment on myself and find healing methods through interactive VR. If we take into account that the mind constantly finds tools and technologies in the world in order to expand its cognitive space, and if thinking is feeling and feeling is thinking, then our emotions co-expand into this space. If we bring creation into the equation and if “making is thinking is feeling” (Gauntlett, 2018), I am persuaded that emerging media are the best form for interactive collaboration between humans and machines.</p>
<p><strong>The Making</strong></p>
<p>The experience was created through the AI Storytelling Project as part of the Immersive Storytelling Lab (ISLab) at York University in Toronto. During my job there as an XR creator, I started to explore how to use interactive and immersive media, such as mixed, virtual and augmented reality (XR), and co-creation with machines (Loveless, 2019; Wolozin, Uricchio and Cizek, 2020; Guzman &amp; Lewis, 2020) for post-pandemic experiences. Specifically, I focused on natural language processing (NLP), a subdivision of artificial intelligence often used for different aspects of human-machine communication (e.g. speech recognition, text generation, speech-to-text and text-to-speech transformation, etc.). Thanks to the cooperation between ISLab and the NLP-based storytelling platform Charisma.AI, I was given access to its beta version, and I started exploring different options for an immersive and conversational experience.</p>
<p>My methodology during the creation of the virtual reality piece <em>Home Is the World VR</em> took the form of creation-as-research, where “creation is required in order for research to emerge” (<a href="https://www.zotero.org/google-docs/?9XATyJ">Chapman &amp; Sawchuk, 2012, p. 19</a>). Via this VR piece, I investigated the relationship between technology, creation and the human condition, or as Guzman and Lewis explain, “how people understand AI in relation to themselves and themselves in relation to AI” (2020, p. 77).</p>
<p>I followed in the footsteps of the rich tradition of human-machine interaction research, elaborated by Sherry Turkle when the personal computer was being adopted into everyday spaces. She described it as a “metaphysical machine,” a concept which led to the study of AI as a challenge to existing conceptualizations of the nature of humans (Guzman &amp; Lewis, 2020, p. 80). Emerging technologies may eventually push past the boundaries of human communication, meaning that human-AI communication may change how we communicate with each other as humans and with other entities.</p>
<p>Before I describe the process of designing <em>Home Is the World VR</em>, I want to give a short synopsis to introduce the technology and the main idea of the piece. The VR experience is aimed at headsets with a passthrough option (e.g. Oculus Quest, HP Reverb G2 Omnicept, etc.) &#8211; a feature that uses the headset&#8217;s sensors to show users a real-time view of their physical surroundings, approximating what they would see if they could look directly through the front of the headset. Hence, the interface is created by an ephemeral space between virtual reality and the user&#8217;s physical environment, where they converse with the AI character, answer its questions, and follow a series of sensory-led tasks designed to induce pleasant memories from before the Covid-19 pandemic. Such tasks include smelling coffee beans, conjuring the smell of an early summer morning, the sound of walking on snow, the taste of a delicious dessert, the touch of a plant, etc. Then, the Charisma.AI software records the user&#8217;s spoken memories of the pandemic and generates a personal pandemic story for each user, individually. The twist: it re-tells the experience somewhat differently, re-contextualizing the user&#8217;s story in a world full of positive news.</p>
<div id="attachment_4477" style="width: 565px" class="wp-caption aligncenter"><img aria-describedby="caption-attachment-4477" decoding="async" class="wp-image-4477" src="https://thewritingplatform.com/wp-content/uploads/2022/08/image001-600x223.jpg" alt="" width="555" height="206" srcset="https://thewritingplatform.com/wp-content/uploads/2022/08/image001-600x223.jpg 600w, https://thewritingplatform.com/wp-content/uploads/2022/08/image001-800x297.jpg 800w, https://thewritingplatform.com/wp-content/uploads/2022/08/image001-400x149.jpg 400w, https://thewritingplatform.com/wp-content/uploads/2022/08/image001-768x285.jpg 768w, https://thewritingplatform.com/wp-content/uploads/2022/08/image001-300x111.jpg 300w, https://thewritingplatform.com/wp-content/uploads/2022/08/image001.jpg 1430w" sizes="(max-width: 555px) 100vw, 555px" /><p id="caption-attachment-4477" class="wp-caption-text">Project workflow</p></div>
<p>Through Charisma.AI&#8217;s dialogue engine and Google Cloud NLP services, players and audiences can meet virtual characters, converse with them, and change the story. The AI character is created through the Charisma.AI platform and the Google Cloud Speech-to-Text service. Charisma.AI uses the language of storytelling, with built-in features such as emotion, memory, scenes and subplots (Charisma, 2017 &#8211; 2021). However, it is not GPT-3-based; it works with a machine-learning dialogue engine, which I found exciting when designing a semi-scripted interactive piece.</p>
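The semi-scripted pattern this enables can be pictured as scripted scene nodes whose transitions are chosen by matching an intent in the player's utterance. The sketch below is a minimal, hypothetical illustration of that pattern only: the node names, the keyword-based classifier and the `step` function are my own stand-ins, not Charisma.AI's actual API or the project's code.

```python
# Hypothetical sketch of a semi-scripted dialogue loop: scripted nodes,
# with transitions picked by a deliberately naive intent matcher.
# This is NOT Charisma.AI's API -- just an illustration of the pattern.

SCENES = {
    "ask_memory": {
        "line": "Tell me about a moment you remember from the pandemic.",
        "intents": {"shared": "ask_month", "refused": "reassure"},
    },
    "ask_month": {"line": "In what month did that happen?", "intents": {}},
    "reassure": {"line": "That's okay. We can sit quietly instead.", "intents": {}},
}

def classify_intent(utterance: str) -> str:
    """Toy intent classifier: keyword matching in place of a real NLP model."""
    refusals = ("rather not", "don't want", "nothing", "no")
    text = utterance.lower()
    if any(k in text for k in refusals):
        return "refused"
    return "shared"

def step(node: str, utterance: str) -> str:
    """Advance the story: classify the player's intent, follow the scripted edge."""
    intent = classify_intent(utterance)
    # Stay on the current node if no scripted transition matches the intent.
    return SCENES[node]["intents"].get(intent, node)
```

A real dialogue engine would replace `classify_intent` with a trained model and attach emotion and memory state to each node; the scripted-graph-plus-classifier shape is the part this sketch aims to convey.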
<p>My research and ideation for the project started with the following questions. What if we could go back in time and re-shape our memories through an interactive AI storytelling experience? What happens when a human and an AI re-create memories together? What if the experience played with the flexibility of memory and its capacity for re-constitution? The connecting threads in these questions were the performative power of language, our habit of making sense of the world by creating stories and putting events and experiences into linear narratives, and the flexibility of the human mind. And it seemed as though Charisma.AI, as a storytelling platform using NLP features such as intents and sentiment, could be used not only as a tool to make stories but also as the content itself.</p>
<p>The saying ‘think before you speak’ has meaning in neuroscience &#8211; the formation of words acts as a delaying function, giving the brain time to deal with information input and retrieval. Traumatic experiences leave such an emotional mark on our brains that, when recalling certain events, we tend to be overcome by emotions and are unable to express what is happening to us through language. It is not by accident that trauma is often dealt with through speech therapy and storytelling. Remembering traumatic events through language and storytelling helps us to deal with them.</p>
<p>Interestingly, there is a connection to sociolinguistics here, specifically to  J.L. Austin’s theory of performative speech acts. In his book <em>How To Do Things with Words, </em>Austin establishes performative utterances, which not only describe a given reality, but also change the social reality they are describing (1962). The theory was later elaborated by Judith Butler in her book <em>Gender Trouble </em>(first published in 1990), where she introduces gender as a discursive and performative practice.</p>
<p>According to van der Kolk, “Traumatic experiences are exceptional because these intensely emotional events are not encoded into the ongoing narrative states” (2014). The traumatic experience is recorded as separate and dissociated from other life events, and, thus, it takes on a timeless and alien quality. In healing trauma, language is crucial.</p>
<p>John J. Ratey claims that when a subject tries to recall a traumatic experience they are overcome with emotion and unable to express it in words. They are ‘dumbstruck’ for a variety of reasons, one of them being that an important part of our brain responsible for emotion, the amygdala, ‘overreacts’, while another part responsible for language and speech, the so-called Broca’s area, “shuts down” (2001, p. 210).</p>
<p>Further, Ratey claims that “the formation and recall of a memory is dependent on the environment, mood and gestalt at the time the memory is formed or retrieved” (2001, p. 208). Each memory is created from a vast interconnected network of pieces in our brain, such as language, emotions and beliefs. Our daily experiences alter these connections, and, therefore, we remember things differently in different phases of our lives.</p>
<p>Apart from neuroscience and linguistics, the theoretical approach framing the project is affect theory, which helps to disclose the ways technology intersects with our limited proximal senses, rhythm, and sense of motion and embodiment. When affect is processed through cognition &#8211; once the signal from the amygdala (limbic system) reaches the prefrontal cortex &#8211; it becomes an emotion. This processing occurs through cognitive action and relations between agents (humans, non-humans, things, the environment).</p>
<p>These cognitive actions are inherently performative: language, bodily and facial gestures, tone of voice. Sara Ahmed (2014) calls these relations “contact”. She argues that even feeling that something is good or bad involves “reading the contact we have with objects in a certain way” (Ahmed, 2014, p. 6). Contact involves a process of reading and an attribution of significance; it “involves also the histories that come before the subject” (Ahmed, 2014, p. 6). Emotions are, thus, culturally and linguistically constructed performative actualizations of affect. Ahmed posits emotion as a cultural construct built through language (2014): when we name and perform our emotions, they become solidified and are transmitted to others.</p>
<p>This process of emotion transmission can also be called “affect contagion” (a perfect example is sentiment contagion through text on social media). Moreover, according to Brian Massumi, emotions are culturally and linguistically situated: “emotions are embedded in the arbitrariness of language and gestural code (including face) involving cognition, and through which we assign these qualities, and which carry meanings in order to be communicated” (1995, p. 89).</p>
<p><strong>Stage Two: The Design</strong></p>
<div id="attachment_4478" style="width: 478px" class="wp-caption aligncenter"><img aria-describedby="caption-attachment-4478" decoding="async" loading="lazy" class="wp-image-4478 size-full" src="https://thewritingplatform.com/wp-content/uploads/2022/08/image002.jpg" alt="" width="468" height="265" srcset="https://thewritingplatform.com/wp-content/uploads/2022/08/image002.jpg 468w, https://thewritingplatform.com/wp-content/uploads/2022/08/image002-400x226.jpg 400w, https://thewritingplatform.com/wp-content/uploads/2022/08/image002-300x170.jpg 300w" sizes="(max-width: 468px) 100vw, 468px" /><p id="caption-attachment-4478" class="wp-caption-text">Scripting in CharismaAI</p></div>
<p>The main aim of the piece is to trigger positive emotions via sensory memory associations and language. In reframing memories through emotions, I applied methods from trauma research, using questions and sensory-led tasks that connect past, present and future through emotion and affect (Damasio, 2005; Van der Kolk, 2014; Busch &amp; McNamara, 2020; Blommaert et al., 2007).</p>
<p>Upon entering the virtual world, users encounter an omnipresent AI character, which converses with them and gives them a series of sensory-led tasks to induce memories and emotions associated with the pre-pandemic world and apply them in the ‘now’. First, the user is asked by the AI character to remember a situation from the Covid-19 pandemic. Second, they are prompted to answer questions such as: What sign are you? Where did you spend your pandemic time? Were there other people? In what month did the significant event happen? After the AI character collects all the necessary information, the software generates a positive event which happened in the same month in 2020 or 2021. These generated pieces of text come from a corpus of news articles about positive events during the pandemic. Here are some examples:</p>
<p>“In March 2020, we learned that a group of dogs trained to protect rhinos from poachers have saved 45 rhinos in South Africa. The dogs, including beagles and bloodhounds, among other breeds, were trained from birth to track down poachers alongside humans in Greater Kruger National Park.”</p>
<p>“June 2020: The Supreme Court rules that no one can be fired for being gay or transgender, and Beyonce releases Black Parade. Yaas, Queen B!”</p>
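The retrieval step behind these examples is easy to picture: a corpus of positive news items keyed by month and year, queried with the month the user names. The snippet below is an illustrative sketch of that lookup; the two corpus entries are paraphrased from the examples above, but the dictionary shape, function name and fallback line are hypothetical, not the project's actual data or code.

```python
# Illustrative sketch: retrieve a positive news item for the month and year
# a user names. The lookup logic is a hypothetical reconstruction of the
# corpus-query step described in the article, not the project's code.

POSITIVE_NEWS = {
    ("March", 2020): "A group of dogs trained to protect rhinos from "
                     "poachers saved 45 rhinos in South Africa.",
    ("June", 2020): "The Supreme Court ruled that no one can be fired for "
                    "being gay or transgender, and Beyonce released Black Parade.",
}

def positive_event(month: str, year: int) -> str:
    """Return a positive event for the given month, with a gentle fallback."""
    item = POSITIVE_NEWS.get((month.capitalize(), year))
    if item is None:
        # No corpus entry for this month: fall back to a generic reassurance.
        return "Good things happened that month too, even if quietly."
    # Re-tell the item in the experience's narrative voice.
    return f"In {month.capitalize()} {year}, we learned that {item[0].lower()}{item[1:]}"
```

Keying on (month, year) keeps the re-told story anchored to the same point in time as the user's own memory, which is what makes the re-contextualization land.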
<div id="attachment_4479" style="width: 395px" class="wp-caption aligncenter"><img aria-describedby="caption-attachment-4479" decoding="async" loading="lazy" class="wp-image-4479 size-full" src="https://thewritingplatform.com/wp-content/uploads/2022/08/image003.jpg" alt="" width="385" height="310" srcset="https://thewritingplatform.com/wp-content/uploads/2022/08/image003.jpg 385w, https://thewritingplatform.com/wp-content/uploads/2022/08/image003-373x300.jpg 373w, https://thewritingplatform.com/wp-content/uploads/2022/08/image003-300x242.jpg 300w" sizes="(max-width: 385px) 100vw, 385px" /><p id="caption-attachment-4479" class="wp-caption-text">An example from my Miro board of the corpus of  positive news events in 2020</p></div>
<p>Apart from language, the senses &#8211; smell, touch, sound, vision and taste &#8211; are directly connected to our emotions (e.g. the sense of smell has a direct relation to our limbic system, which is why it triggers memories and emotions without any cognitive interference). Therefore, in the second part of the experience, the user is asked to follow sensory-led tasks to deepen the positive emotions. The next image presents different sensory tasks that were brainstormed during a workshop with several participants.</p>
<div id="attachment_4479" style="width: 395px" class="wp-caption aligncenter"><img aria-describedby="caption-attachment-4479" decoding="async" loading="lazy" class="wp-image-4479 size-full" src="https://thewritingplatform.com/wp-content/uploads/2022/08/image003.jpg" alt="" width="385" height="310" srcset="https://thewritingplatform.com/wp-content/uploads/2022/08/image003.jpg 385w, https://thewritingplatform.com/wp-content/uploads/2022/08/image003-373x300.jpg 373w, https://thewritingplatform.com/wp-content/uploads/2022/08/image003-300x242.jpg 300w" sizes="(max-width: 385px) 100vw, 385px" /><p id="caption-attachment-4479" class="wp-caption-text">Things to feel with</p></div>
<div id="attachment_4481" style="width: 478px" class="wp-caption aligncenter"><img aria-describedby="caption-attachment-4481" decoding="async" loading="lazy" class="wp-image-4481 size-full" src="https://thewritingplatform.com/wp-content/uploads/2022/08/image005.jpg" alt="" width="468" height="265" srcset="https://thewritingplatform.com/wp-content/uploads/2022/08/image005.jpg 468w, https://thewritingplatform.com/wp-content/uploads/2022/08/image005-400x226.jpg 400w, https://thewritingplatform.com/wp-content/uploads/2022/08/image005-300x170.jpg 300w" sizes="(max-width: 468px) 100vw, 468px" /><p id="caption-attachment-4481" class="wp-caption-text">Blueprinting in UE4 with CharismaAI plugin</p></div>
<p style="text-align: left;">In the finale, the AI character summarises the collected information about the user. However, it replaces the negative event with a positive one. And it asks: do you feel better?</p>
<div id="attachment_4482" style="width: 478px" class="wp-caption aligncenter"><img aria-describedby="caption-attachment-4482" decoding="async" loading="lazy" class="wp-image-4482 size-full" src="https://thewritingplatform.com/wp-content/uploads/2022/08/image006.jpg" alt="" width="468" height="210" srcset="https://thewritingplatform.com/wp-content/uploads/2022/08/image006.jpg 468w, https://thewritingplatform.com/wp-content/uploads/2022/08/image006-400x179.jpg 400w, https://thewritingplatform.com/wp-content/uploads/2022/08/image006-300x135.jpg 300w" sizes="(max-width: 468px) 100vw, 468px" /><p id="caption-attachment-4482" class="wp-caption-text">Prototyping</p></div>
<p><strong>Stage Three: User Testing</strong></p>
<p>User testing is crucial when making interactive and immersive experiences (e.g. for improving the user experience design). Moreover, it is vital to check that the VR experience doesn’t cause any physical or mental issues for participants. When I tested the prototype in our lab, some participants felt as though it was a ‘confessional’ experience. They saw it as a safe space where they could talk to a scripted AI character who tried to make them feel better about the pandemic by taking them to other emotional landscapes.</p>
<p>On the other hand, when I ran an embodied proof-of-concept exercise with participants at an in-person conference this year, some told me they did not want to go back to the pandemic years. Does this prove or disprove the healing effect of the experience, and the traumatic element of the pandemic?</p>
<p>Busch and McNamara claim that:</p>
<p>“Avoidance of painful intrusions as a measure of self-protection is only one of the reasons that makes it difficult to share a traumatic experience with others. There are many other reasons such as: speaking about what happened can be subject to interdiction, social taboo, or shame; a common ground of experience or knowledge is missing; one does not want to burden others with one’s own pain; the violation and the suffering have not yet been socially acknowledged” (2020, p. 329).</p>
<p>One user may go through an intimate immersive and interactive experience with an AI character which not only lends them an ear but also offers social acknowledgment of the pandemic as traumatic. For another user, it may be a reiteration of painful memories they are not ready to share.</p>
<p><strong>Conclusion</strong></p>
<p>A new research question needs to be proposed: when does a first-person VR experience heal, and when does it re-traumatize? I have experienced far too many re-traumatizing VR pieces in the past (e.g. being sexually, physically, or verbally assaulted in VR can re-traumatize users who experienced a similar situation in real life, particularly if they have no agency or control over the situation in VR). My VR piece aims to stand in opposition to such XR works. It re-frames negative memories as positive pandemic stories, and I hope it can transform negative emotions into positive ones through language, conversation, and sensory triggers. The iterative process of creation-as-research doesn’t end with a final build of an experience. My work on this project is to be continued.</p>
<p><strong>References</strong></p>
<p>Ahmed, S. (2014). <em>The Cultural Politics of Emotion</em>. Edinburgh University Press.</p>
<p>Austin, J.L. (1975). <em>How to Do Things with Words</em>. Clarendon Press.</p>
<p>Blommaert, J., Bock, M., &amp; McCormick, K. (2007). ‘Narrative inequality in the TRC hearings’ in C. Anthonissen and J. Blommaert (eds): <em>Discourse and Human Rights Violations</em>, pp. 33–63. John Benjamins Publishing.</p>
<p>Busch, B., &amp; McNamara, T. (2020). Language and Trauma: An Introduction. <em>Applied Linguistics</em>, <em>41</em>(3), 323–333. <a href="https://doi.org/10.1093/applin/amaa002">https://doi.org/10.1093/applin/amaa002</a></p>
<p>Butler, J. (2006). <em>Gender Trouble</em>. Routledge Classics. New York: Routledge.</p>
<p>Chapman, O. B., &amp; Sawchuk, K. (2012). Research-Creation: Intervention, Analysis and “Family Resemblances.” <em>Canadian Journal of Communication</em>, <em>37</em>(1), Article 1.</p>
<p><em>Charisma — Storytelling powered by artificial intelligence</em> (no date). Available at: <a href="https://charisma.ai/">https://charisma.ai/</a> (Accessed: 15 December 2021).</p>
<p>Damasio, A. (2005). <em>Descartes’ Error: Emotion, Reason, and the Human Brain</em>. Penguin.</p>
<p>Gauntlett, D. (2018). <em>Making is Connecting</em> (2nd ed.). Polity Press.</p>
<p>Guzman, A. L., &amp; Lewis, S. C. (2020). Artificial intelligence and communication: A Human–Machine Communication research agenda. <em>New Media &amp; Society</em>, <em>22</em>(1), 70–86.</p>
<p>Loveless, N. (2019). <em>How to Make Art at the End of the World</em>. Duke University Press.</p>
<p>Massumi, B. (1995). The Autonomy of Affect. <em>Cultural Critique</em>, <em>31</em>, 83–109.</p>
<p>Milgram, P. and Kishino, F. (1994). ‘A Taxonomy of Mixed Reality Visual Displays’, <em>IEICE Trans. Information Systems</em>, E77-D, no. 12, pp. 1321–1329.</p>
<p><em>OpenAI</em> (no date) <em>OpenAI</em>. Available at: <a href="https://openai.com/">https://openai.com/</a> (Accessed: 15 December 2021).</p>
<p>Ratey, J. J. (2001). <em>A User&#8217;s Guide to the Brain</em>. New York: Random House.</p>
<p>Tyng, C.M. <em>et al.</em> (2017). ‘The Influences of Emotion on Learning and Memory’, <em>Frontiers in Psychology</em>, 8, p. 1454. doi:<a href="https://doi.org/10.3389/fpsyg.2017.01454">10.3389/fpsyg.2017.01454</a>.</p>
<p>Van der Kolk, B. (2014). <em>The Body Keeps the Score</em>. Viking.</p>
<p>Wolozin, S., Uricchio, W., &amp; Cizek, K. (2020). <em>Collective Wisdom</em>. Massachusetts: MIT Press.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Opportunity: AI Story Lab</title>
		<link>https://thewritingplatform.com/2020/03/opportunity-ai-story-lab/</link>
		
		<dc:creator><![CDATA[Amy Spencer]]></dc:creator>
		<pubDate>Tue, 03 Mar 2020 13:47:34 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<category><![CDATA[AI]]></category>
		<guid isPermaLink="false">http://thewritingplatform.com/?p=4114</guid>

					<description><![CDATA[<span class="rt-reading-time" style="display: block;"><span class="rt-label rt-prefix">Reading Time: </span> <span class="rt-time">2</span> <span class="rt-label rt-postfix">minutes</span></span> From talking to Alexa, to using virtual reality and tech in immersive theatre, to the way we interact with characters in games, to innovations in TV narrative such as Black Mirror’s Bandersnatch, a revolution in creating stories and characters is underway. This revolution is driven by AI (Artificial Intelligence). Innovation in machine learning will see...  <a class="read-more" href="https://thewritingplatform.com/2020/03/opportunity-ai-story-lab/" title="Read Opportunity: AI Story Lab">Read more &#187;</a>]]></description>
										<content:encoded><![CDATA[<span class="rt-reading-time" style="display: block;"><span class="rt-label rt-prefix">Reading Time: </span> <span class="rt-time">2</span> <span class="rt-label rt-postfix">minutes</span></span><p><span style="font-weight: 400;">From talking to Alexa, to using virtual reality and tech in immersive theatre, to the way we interact with characters in games, to innovations in TV narrative such as Black Mirror’s </span><i><span style="font-weight: 400;">Bandersnatch</span></i><span style="font-weight: 400;">, a revolution in creating stories and characters is underway. This revolution is driven by AI (Artificial Intelligence). Innovation in machine learning will see stories emerge where the audience becomes part of the story and can interact with characters who have their own voices, emotions and memories, and who make their own decisions.</span></p>
<p><span style="font-weight: 400;">That’s why four innovative organisations in the UK are joining together on a new and exciting opportunity to offer writers in the region a chance to work in the cutting-edge, rapidly evolving world of digital writing driven by AI. </span><a href="https://www.bathspa.ac.uk/research-and-enterprise/research-centres/centre-for-cultural-and-creative-industries/"><span style="font-weight: 400;">The Centre for Creative and Cultural Industries</span></a><span style="font-weight: 400;">, </span><a href="https://bristolbathcreative.org/"><span style="font-weight: 400;">Bristol+Bath Creative R+D</span></a><span style="font-weight: 400;">, creative writing incubator </span><a href="http://papernations.org/"><i><span style="font-weight: 400;">Paper Nations</span></i></a><span style="font-weight: 400;">, and Oxford-based cutting-edge AI company </span><a href="https://www.toplayfor.com/"><i><span style="font-weight: 400;">To Play For</span></i></a><span style="font-weight: 400;">, are offering up to ten fully-funded places to writers under-represented in the publishing and gaming industries in the South West of the UK on a two-day workshop in Bath in May. </span></p>
<p><span style="font-weight: 400;">The workshop focuses on using AI to create characters and develop stories. Participants will look at how to use AI to revolutionise the way we interact with characters in forms, such as games, movies, apps, in mobile, VR online and more. The workshop will provide a masterclass on </span><i><span style="font-weight: 400;">To Play For</span></i><span style="font-weight: 400;">’s </span><a href="https://charisma.ai/"><span style="font-weight: 400;">Charisma.ai</span></a><span style="font-weight: 400;"> platform, giving hands-on experience to the writers of how to adapt existing stories as well as create new ones. </span><span style="font-weight: 400;">Up to three of the writers may go on to paid, five-week placements with </span><i><span style="font-weight: 400;">To Play For.</span></i></p>
<p><span style="font-weight: 400;">Bambo Soyinka, Paper Nations’ Executive Development Producer, has said: </span></p>
<p><span style="font-weight: 400;">“</span><i><span style="font-weight: 400;">We are collaborating on these workshops to bring interactive writers and the AI storytelling experts together. Like particles colliding, we believe new and amazing things will happen. We want people with experience in theatre, comics or gaming; writers who work in performing and telling stories in a range of ways, who can breathe new life into their characters and stories with AI.”</span></i></p>
<p><a href="http://papernations.org/writing-for-all/call-for-action/ai-story-lab/"><span style="font-weight: 400;">The application process is open now.</span></a></p>
<p><span style="font-weight: 400;">Applications close at midday on Thursday 12th March.</span></p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Who owns digital stories?</title>
		<link>https://thewritingplatform.com/2019/08/who-owns-digital-stories/</link>
		
		<dc:creator><![CDATA[Amy Spencer]]></dc:creator>
		<pubDate>Mon, 05 Aug 2019 10:14:28 +0000</pubDate>
				<category><![CDATA[Research]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[mix]]></category>
		<guid isPermaLink="false">http://thewritingplatform.com/?p=3948</guid>

					<description><![CDATA[<span class="rt-reading-time" style="display: block;"><span class="rt-label rt-prefix">Reading Time: </span> <span class="rt-time">6</span> <span class="rt-label rt-postfix">minutes</span></span> This is an abridged version of a keynote speech delivered at the MIX Conference 2019 With the increasing convergence between creative industries and artificial intelligence, there is an emerging misunderstanding of how the tech world sees creativity, and this is important for publishers, authors and the broader creative industries. To frame this, it is important...  <a class="read-more" href="https://thewritingplatform.com/2019/08/who-owns-digital-stories/" title="Read Who owns digital stories?">Read more &#187;</a>]]></description>
										<content:encoded><![CDATA[<span class="rt-reading-time" style="display: block;"><span class="rt-label rt-prefix">Reading Time: </span> <span class="rt-time">6</span> <span class="rt-label rt-postfix">minutes</span></span><p><em>This is an abridged version of a keynote speech delivered at the MIX Conference 2019</em></p>
<p>With the increasing convergence between creative industries and artificial intelligence, there is an emerging misunderstanding of how the tech world sees creativity, and this is important for publishers, authors and the broader creative industries.</p>
<p>To frame this, it is important to understand the basics of AI. In essence, AI is like a child-computer. It can learn, be educated and trained, and just like a growing child, it needs feeding.</p>
<p>AI’s food is data: to train an AI, you need to feed it data. Lots of data. And like a child, it will grow well if you feed it good food.</p>
<p>Looking for large datasets, many companies ‘scrape’ data from open public websites like Reddit, or download datasets from various bulletin boards and websites.</p>
<p>Some companies try to aim for quality, and turn to published books as sources for their data. In the world of natural language, there is value in the content of existing works.</p>
<p>Perhaps one of the first indicators of this was the legal case between the Authors Guild and Google. In 2004, Google started scanning and offering books as part of its Google Library project, now called Google Books. It hailed this as an enormous democratisation project, expanding the reach of human literature and knowledge.</p>
<p>A year later, the Authors Guild sued Google for breach of copyright. The case continued &#8211; and so did Google’s scanning. In 2010, Google estimated that there were 130 million titles in existence, and stated its goal: to scan all of them.</p>
<p>The concern from the Authors Guild, and of course from publishers, was that by offering copyright titles, Google would damage revenues to authors, publishers and copyright owners.</p>
<p>By October 2015, Google had scanned over 25 million book titles.</p>
<p>In 2016, after the US Supreme Court found in favour of Google, Authors Guild President Roxana Robinson summarised: “We’re witnessing a vast redistribution of wealth from the creative sector to the tech sector, not only with books but across the spectrum of the arts.”</p>
<p>Now, imagine I am producing a film and want to quote a section of a book in it, and use part of an audio track. As well as enhancing the story within the film, this will grow the value of the film itself. So I contact the copyright holders, do a deal and carry on with my production. Even if I base my film only loosely on an existing text, I clear the rights. There are legal, moral and commercial reasons to do this.</p>
<p>In contrast, Google is using scanned copyrighted material to build its current and future products, and as far as we can tell from the settlement with the Authors Guild seven years ago, without any reference to authors being compensated for the use of their work.</p>
<p>For example, Google’s Talk to Books project allows users to: “make a statement or ask a question, and the tool finds sentences in books that respond, with no dependence on keyword matching. In a sense you are talking to the books, getting responses which can help you determine if you’re interested in reading them or not.”</p>
<p>Google has been looking similarly at the museum and arts sector. I wrote in a previous article that many museums are placing their images in the public domain as part of an advanced digital strategy to encourage worldwide creative use of their content. Yet while a museum’s public-service approach is to allow commons access to its work, the 360 StreetView version of that museum that Google photographed carries a discreet “copyright Google” at the bottom of the screen &#8211; so the virtual tour of the museum belongs to Google.</p>
<p>True, the marketing strength that Google brings to large and small art museums creates a global awareness that could only be dreamed of before digital media.</p>
<p>But look ahead: new developments in virtual reality are all about immersion, and Google is a key player in its development. As this technology improves, the experiential gap between visiting a museum in reality and visiting it virtually will diminish, and with it, the reason for people to visit the physical space.</p>
<p>VR will allow the emotional connection with art to be recreated from the digitised versions. Once an artefact has been digitised, it makes no difference to Google’s service whether the physical location still exists or not.</p>
<p>Remember that Google is a search and advertising company. Its primary profits come from its unparalleled ability to tailor advertising to an individual segment of one – you.  And the advertising it displays is the trade-off that allows you to see its search results, its YouTube videos and its maps for free. As you use its services, it builds up a profile of you, which it sells to advertisers.</p>
<p>While this seems like the most enormous copyright abuse in history – and indeed it could be – it is more broadly symptomatic of a wider disrespect by the technology industry towards the creative industries.</p>
<p>Certainly there has for a while existed a valley of misunderstanding between the tech and creative world, and it is important for the health of both sectors, and humankind itself, that this valley is bridged as soon as possible.</p>
<p>When I joined Penguin Digital in the mid-nineties, our IT manager referred to all the books we published as “data”. You can imagine how this went down with the editorial teams at the time!  Yet with hindsight, his outlook was prophetic.</p>
<p>Later, working in TV, I found the engineers called the programme makers “coloured pencil people”.</p>
<p>This year, in his book The Creativity Code, the Oxford mathematician Marcus du Sautoy proposes that computers can produce art like Rembrandt’s. It seems like art and creative writing have now become a summit for AI programmers to conquer.</p>
<p>This is dangerous.</p>
<p>Firstly, it makes the cultural supposition that Art is somehow a technical problem to be solved. It is not. This perspective belittles the creative industries and will have an impact on funding, perceived quality and indeed on the academic, practical and soulful journey that it takes to become an artist.</p>
<p>If Art is perceived as something a computer can conceive better than humans, and replicate infinitely, what hope do true artists have?</p>
<p>Du Sautoy cites Christie’s selling an AI-generated Rembrandt-style artwork for $350,000 as legitimisation of the value of AI and art. It categorically is not. Christie’s recognised a novelty, all within the highly commercialised world of international art auctions.</p>
<p>Secondly, there is no merit to humanity in getting AI to create Art. Indeed it is a waste of AI’s strength to try to anthropomorphise it into recreating human creativity. It should be used to solve problems that humans cannot solve, but whose solutions humankind profoundly needs &#8211; education, inequality, climate change, population growth, energy efficiency, new politics. These are valid and urgent challenges for AI.</p>
<p>Thirdly, and regardless of any futuristic developments, existing works of art should not be used to create technology projects without recompensing the vast numbers of people who have contributed their works to it – whether knowingly or unknowingly.</p>
<p>Ultimately, and most importantly, we must recognise that as technology companies like Google and Amazon seek to engage audiences with stories, conversations and entertainment, they will be looking to the creative industries for fodder.</p>
<p>While some of the proverbial horses may have bolted due to decisions made in the past, this is not the end.</p>
<p>This is our time to define our own narrative and decide what future we want for our stories. We need to understand that the major technology companies are not benign, and that they think far into the future. Seemingly fun experiments with AI “talking in the voice of a book” have implications that will travel all the way down to whether an author can afford a coffee.</p>
<p>So this speech becomes a call to arms. A call to words.</p>
<p>A call that control of the future of the creative industries should not rest with monolithic search and advertising companies.</p>
<p>So beware geeks bearing gifts. Pause before you partner. Don’t default to letting Google scan your libraries, museums and galleries. Don’t let Alexa listen in on your life, your love and your arguments. Do query your presence on Facebook.</p>
<p>Do understand that words are powerful, but also need protecting.</p>
<p>Do understand that, where advanced technology is involved, there are moral decisions that are being made, and without us at the table, these decisions will only have company objectives at their heart.</p>
<p>Be very sceptical of any company’s “AI Ethics Board”, which is a compromised jury.</p>
<p>As storytellers, we are smart, creative and economically vulnerable. But we have stories and creativity that are valuable.</p>
<p>To recognise this is the first step towards being aware of the value we have to technology monoliths, and why, as we head towards the dawn of a new phase of technology, we need to stop them trying to recreate us inside their machines, and instead work jointly to safeguard the importance that human creativity has in all our futures.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
