Using Open Rev in the Humanities Classroom

So here’s the thing: everyone decides to start a blog when they’re in grad school. Here’s another thing: finishing a Ph.D. takes up all your time. If you have a blog and managed to keep it up throughout the process of finishing, I tip my hat to you. I found it difficult to keep myself on a regular schedule of eating and sleeping, let alone working on extracurricular writing activities. In any case, I’ve forgiven myself. And now we’ll return to the (not so) regularly scheduled blog.

 

[Image: the printed Ph.D. thesis sitting on a desk]

Don’t judge me for the super-unorganized cord situation in the background.

The fourth chapter of my Ph.D. thesis (pictured above because it is real and that is still exciting to me) focuses on the design and execution of an undergraduate course, called ‘The Notation of Medieval Song’ (henceforth MU3423), which I co-taught in the Music Department at Royal Holloway with my supervisor in the spring of 2016. The course focused on the notation used to write song in British and Irish sources during the 12th and 13th centuries. While this blog won’t cover the contents of the entire chapter (you’ll have to wait for the article!), it will focus on one specific tool that I used: Open Rev.

Open Rev is an open-source tool for collaborative annotation. It was originally created by a team of Harvard grad students, and was intended mostly for scholars and students in STEM fields. The impetus for its creation was to provide a platform to which users could upload open-access scholarly publications, which could then be publicly annotated, creating a discussion free of the constraints of publisher paywalls.

In MU3423, my students used Open Rev to interact with, analyse, annotate and discuss digital manuscript images. If you’re worried about image copyright, a range of privacy settings is available on Open Rev. In the case of MU3423, I set up a private group to ensure that only approved users had access to the MS images and student-created content. In future iterations of this type of course, I will definitely attempt either to use non-copyrighted images or to gain permission from digital archives (and from my students) so that I can make the course public (many manuscript databases allow scholars to download images for use in teaching as long as credit is given to the source and no monetary profit is made from their presentation). Given the interest this resource has generated just through conversations I’ve had with people about it, it would have been great to be able to show publicly how effective it was, and to share widely the insightful comments my students made throughout the course.

When students log into the Open Rev Group home page, they see thumbnails of recent activity and a list of all the documents. Each document has a unique title (I used the library sigla & folio numbers in lieu of song titles, since many songs had concordances in several of the manuscripts examined), but documents can also be tagged. Because we were working with multiple documents during each weekly lecture/discussion, I tagged each document by week, so that students could easily access that week’s work without having to remember multiple sigla.
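For anyone who likes to see the scheme spelled out, here is a minimal sketch of that naming-and-tagging logic in Python. The sigla, folio numbers and week tags below are illustrative placeholders rather than my actual course list, and the snippet only mimics the organisation; it is not anything Open Rev itself provides.

# Illustrative only: each uploaded document is titled by siglum + folio
# and tagged by teaching week, so a whole week's material can be pulled
# up at once without remembering individual sigla.
documents = [
    {"title": "GB-Lbl Arundel 248, f. 154r", "tags": ["week-3"]},
    {"title": "GB-Ob Rawl. G. 22, f. 1v",    "tags": ["week-3", "week-5"]},
    {"title": "GB-Cu Ff.1.17, f. 299r",      "tags": ["week-4"]},
]

def documents_for_week(week_tag):
    """Return the titles of every document tagged for the given week."""
    return [doc["title"] for doc in documents if week_tag in doc["tags"]]

print(documents_for_week("week-3"))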

The majority of the assignments I had my students carry out on Open Rev required them to highlight areas of the image, and then write a comment about the contents of the highlighted area (usually a group of music notes, a phrase of text, or both). Then, each week, the students were also asked to write a comment on another student’s annotation. I would also write comments on the students’ annotations, usually to answer questions or point out mistakes and prompt them to explore alternative possibilities for transcription, but without ever divulging my own interpretations of the individual note forms or words. The student comments on their colleagues’ annotations were insightful, and above all, helpful: the resulting conversations led to students coming to their own conclusions, with minimal assistance from course supervisors.

Students could also use the shared elements of the platform to see how the interpretation of certain note forms and scribal traits varied among their peers, and to follow the various routes their colleagues took in order to come to their own conclusions – there are almost always multiple interpretative possibilities that need to be examined before scribal error can be presented as a well-researched hypothesis. Palaeography, in many cases, can be less about identifying an individual grapheme than about determining what forms within a writing system a specific grapheme is not, and the Open Rev tasks helped to shed light on students’ processes of elimination.

The downside of using a platform originally intended for text annotation to carry out image annotation is, of course, the file upload size limit. The images had to be converted to PDF, with a maximum file size of 20MB. To put this into perspective, digital MS images are regularly 400MB, even up to 1GB, depending on the file type. However, even after being converted, none of the 20 images used for MU3423 became pixellated enough to inhibit the students’ ability to carry out close palaeographic work (though I was initially worried about a few).
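For anyone curious how that conversion might be scripted, here is a minimal sketch (not the exact workflow I used) that shrinks a large manuscript image until the resulting PDF fits under the 20MB ceiling, using the Python Pillow library; the file names, the shrink factor and the treatment of 20MB as a hard limit are assumptions made for the example.

import os
from PIL import Image

MAX_BYTES = 20 * 1024 * 1024  # the 20MB upload ceiling

def image_to_small_pdf(src_path, dest_path):
    """Downscale an image repeatedly until its PDF version fits the size limit."""
    img = Image.open(src_path).convert("RGB")  # PDF export needs RGB
    scale = 1.0
    while True:
        size = (int(img.width * scale), int(img.height * scale))
        img.resize(size, Image.LANCZOS).save(dest_path, "PDF", resolution=300)
        if os.path.getsize(dest_path) <= MAX_BYTES or scale < 0.2:
            break
        scale *= 0.8  # shrink by 20% and try again

image_to_small_pdf("arundel_248_f154r.tiff", "arundel_248_f154r.pdf")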

Using Open Rev allowed me to communicate with my students outside the classroom, while also allowing them to communicate with one another. The conversations that were started within the platform were then continued in the following week’s classroom discussion, which was particularly helpful later in the course as the subject matter became more difficult. This tool allowed the students of MU3423 to work with primary source material in a collaborative environment and also to receive regular feedback on their weekly progress. Because all of the content on Open Rev is based around individual uploads, it can be used for a variety of subjects (not just STEM, as this course shows!). The framework is easy to navigate, which meant that students could complete the required tasks no matter what level of experience they had with technology. I’ll definitely be using it again for future incarnations of this particular course, and for others as well. Have you used it? I’d love to hear about experiences in other classrooms!

 


Technology in Higher Education

(This was originally written as part of the requirements for Royal Holloway’s Programme in Skills of Teaching to Inspire Learning (inSTIL))

This post was inspired by Janine Utell’s ‘Making a Space for the Digital and the Scholarly: The Editor as a Teacher’, 2 April, 2015.

The question of whether technology influences teaching is obsolete. That is to say, there is no question: technology has influenced teaching whether or not teachers consciously make an effort to include it in their practice. That is not what I mean to discuss under the banner of ‘Technology in Higher Education’. Higher education differs from other forms of teaching in requiring its participants to continue their own work as researchers, publishers and writers, all while maintaining their responsibilities as educators. The ‘Research v. Teaching’ debate has been carried out countless times in a multitude of forums, and I won’t resurrect it here. However, I do want to focus on some problems with the active integration of technology into higher education: specifically publishing, and how teaching with technology needs to go beyond simply using technology.

Janine Utell frames her Hybrid Pedagogy piece within Bloomsbury, specifically in the London of Virginia Woolf’s writing. Her opening passage notes the resonance of literature within the very streets of this city, and its existence both as a place of physical and immediate experience and as a memory of words and images received and interpreted via another person’s mind’s eye. I am familiar with both Londons, the two finally crossing paths in my delight at understanding the Underground-related puns in Neil Gaiman’s Neverwhere when I re-read the book after starting my Ph.D. Utell’s focus on London’s existence as a literary setting is important, because it shows how easily we romanticise elements of literature: not just the contents, but the physical objects themselves. While this piece does not engage with literature directly, I don’t think we can ignore the influence literature has had in attaching value to the physical, printed word. Utell’s description of colleagues who feel that a digital journal will somehow be less ‘credible’ is understandable: for people who have spent much of their lives placing monetary value on physical objects, a PDF can be disappointing.

Utell suggests that a concern of editors of digital (albeit scholarly) publications is that they cannot fulfil the role of ‘mentor’ to young scholars, a role that comes from the ability to offer those scholars continued survival in their field via publication (as Utell puts it, ‘that all-important file in the conventionally acceptable format for a P & T committee’). In other words, the value of a publication is still being judged (at least in part) by associating worth with a physical object. This physical value translates to scholarly interaction with open-access material, as well. There is still an underlying, Pandora-esque idea that something behind a paywall must be more valuable than something left out in the open, and Utell brings up what Dan Cohen has called ‘the social contract’ between authors and readers: that the value comes from the time required to produce a physical object, beautifully presented and free of error. The immediacy of digital publishing and the ability to quickly edit already-published material must mean that less work is required for the initial draft, right? Well, no. Books still contain typos and varying degrees of error; the difference is that a book will retain those errors until a second edition is published, while the digital format can be updated as needed.

This is obviously not meant to be taken as a diatribe against books. Instead, I’m hoping to shed some light on why technology remains a contentious subject in higher education. Teaching with technology only becomes successful if the pedagogy develops alongside the resources. In the case of editorial work, Utell suggests that one way to counteract the de-valuing of digital publication is to rethink the process of peer review. She describes the pedagogy behind modern reconstructions of the peer review process (championed by the editorial staff at Hybrid Pedagogy) as akin to a collaboration: a conversation between writer and editor with its roots in the Socratic method, one that allows both to learn and develop from the process.

It is in this description of editorial pedagogy that I believe the most important aspect of ‘Technology in Higher Education’ is found. If we want to include technology in higher education, we must not only think of it as a tool for presenting information or a way to elicit interest from bored students; we must allow our classrooms to spill over into new platforms. If we want students to place value on digital content, their own work included, we must prove as teachers that we understand that content’s worth.

Bibliography

Cohen, Dan. ‘The Social Contract of Scholarly Publishing’. DanCohen.org (blog). 5 March, 2010, http://www.dancohen.org/2010/03/05/the-social-contract-of-scholarly-publishing/.

Utell, Janine. ‘Making a Space for the Digital and the Scholarly: The Editor as a Teacher’. Hybrid Pedagogy. 2 April, 2015, http://www.hybridpedagogy.com/journal/making-a-space-for-the-digital-and-the-scholarly-the-editor-as-teacher/.


September Conference Review (Part 3: DigiPal)

This is Part 3 of a series of posts informally reviewing three conferences I attended (giving papers at two) in September. For Part 1, click here. For Part 2, click here.

The Noises of Art: Audiovisual Practice in History, Theory and Culture (The School of Art, Aberystwyth University / The Courtauld Institute of Art, London / Aberystwyth Arts Centre) 4-6 September, 2013.

Cantum pulcriorem invenire: Music in Western Europe, 1150-1350 (University of Southampton) 9-11 September, 2013.

DigiPal Symposium III (King’s College, London) 16 September, 2013.

Palaeography, along with codicology, is invaluable to close work with manuscripts, no matter the contents. The study of handwriting can tell us so much not only about the content of a manuscript, but also about the scribe (or scribes) who wrote it. Historians (in any field, including music) can use palaeography to study the writing practices of a specific time period or geographical area, or to help determine when and where a MS was created (or whether certain content should be dated differently from the rest of the MS). Palaeography also happens to be a large focus of my PhD work, which examines the handwriting practices of a large number of scribes notating songs in British sources between 1150 and 1300. Because so much of my research involves pedagogy, it’s important to note that I believe palaeography and the teaching and learning of early notation go hand in hand: if students are going to learn early notation, they really need to be learning palaeography as well. Most of the time they are already learning palaeographical techniques; they just don’t realize it.

The group of songs I’m working with numbers just over 100 (115, or more if you include the different text settings of similar musical content), and this type of large-scale (note: large is a relative term here – I’m aware that 100 doesn’t necessarily constitute ‘large’ in terms of data comparison) comparative palaeography is one of the most interesting ways that technology is being used in the field of manuscript studies. When I attended the DigiPal symposium last year, it was a major inspiration for my PhD work, especially from a palaeographical perspective, so when I saw the call for papers this year, I knew I wanted to submit an abstract. DigiPal is a project based at King’s College, London, focused on the study of medieval handwriting in England between 1000 and 1100. It’s a really useful resource, even though it’s still being developed. In particular I find the Glossary helpful, because terminology is a famously finicky area of the field – what one scholar calls a letterform may differ wildly from another’s terminology.

In terms of digital palaeography, there is some work being done in this field that is beyond me, technologically, and without a trace of irony I can say it blows my mind. As of now it is mostly confined to alphabetic writing systems, and within that it is largely specific to the purpose of transcription. But the interesting thing is the way in which the writing systems are being broken down for the purposes of analysis on multiple levels (find the DigiPal Symposium III programme here; Session III: Digital Methods contained a lot of the large-scale/hardcore digital analysis work, especially Lambert Schomaker’s and Jean-Paul van Oosten’s presentations). Projects all over the world are being developed using analysis by word, word in context (Eleanor Anthony also mentioned using context as a tool in her fascinating presentation on recreating damaged MSS by way of probabilistic network approaches), letter, letterform, stroke, or shape. And sometimes the reasons a project gives for breaking down a text (or texts) in a certain way can be indicative of the expected outcome – as the field progresses it will certainly be fascinating to see how the methodology can change the results.

Lambert Schomaker probably said it best, though, at the end of his presentation, when he noted, to the relief of all the palaeographers in the audience, that (and I’m paraphrasing) no matter how advanced technology becomes, nothing can take the place of a scholar’s eye when it comes to analysing something in its context.

This was something I noted in my presentation during the first session, Manuscripts and the Digital Age. My paper was called ‘Musical Perception and Digital Surrogates: On Using E-Resources for Teaching Early Music Notation’, and in it I discussed elements of musical palaeography that can effectively be taught to undergraduate students using electronic resources. Rather than being a tool developed specifically for research or data mining, this type of resource can be used to encourage students to develop this ‘scholar’s eye’, and hopefully to complement future digital research with a generation of ‘traditionally’ trained scholars who also know the benefits of working with digital sources. Digital resources, I think, can be especially good for palaeography training (an approach many institutions have already applied to the study of alphabetic writing systems) because there are so many minute details that vary across a wide range of sources, and even within individual sources. It’s incredibly helpful to be able to do close comparative work, and digital images allow for this type of research more readily than physical sources do, especially when comparing forms in separate MSS.

Hearing traditional palaeographic work was helpful, too. Any scholar who works with medieval music sources will also have to work with text (90% of the time, anyway), so it was nice to have the opportunity to hear about current palaeographical work in a more ‘traditional’ setting (see the last two papers in Session IV from David Ganz and Tessa Webber, both of which were fantastic). It was certainly an information-packed day, and the organisers (Peter Stokes & Stewart Brookes, both at King’s College, London) should be commended for a really useful and creative project, and a great conference.


September Conference Review (Part 2: Music in Western Europe, 1150-1350)

This is Part 2 of a series of posts informally reviewing three conferences I attended (giving papers at two) in September. For Part 1, click here. For Part 3, click here.

The Noises of Art: Audiovisual Practice in History, Theory and Culture (The School of Art, Aberystwyth University / The Courtauld Institute of Art, London / Aberystwyth Arts Centre) 4-6 September, 2013.

Cantum pulcriorem invenire: Music in Western Europe, 1150-1350 (University of Southampton) 9-11 September, 2013.

DigiPal Symposium III (King’s College, London) 16 September, 2013.

I have to start this review with an apology. Not only was I present for just 2 of the 3 days of this conference, but I was also pathetically ill the entire time. So I felt I should include a disclaimer that I was under the influence of cough syrup/cold medication, and I may have missed an interesting point or two while either coughing like a 19th-century consumption patient, or trying not to choke in an attempt to NOT cough like a 19th-c. consumption patient. Anyway, if you read anything about people speaking in mysterious languages or how interesting it was that a prominent scholar sprouted wings in the middle of a talk, it probably isn’t true.

If the Noises of Art conference was a place for diversity, Mark Everist’s Cantum pulcriorem invenire: Music in Western Europe, 1150-1350 (University of Southampton) was at the opposite end of the conference spectrum, focusing on only two centuries of Western music, specifically the conductus. Though attending both required a lot of traveling in a short amount of time, I’m actually quite glad I went to these conferences one after another, as it was interesting to see how they were each inspiring in different ways. An excerpt from the call for papers: ‘The conference seeks to shed light on the issues around the discovery and management of known and newly-discovered source material, the implications of claims of meaning in thirteenth-century music, the use of digital technologies in the study of music of the period, as well as other traditional and innovatory approaches’.

To put it frankly, this was hardcore musicology at its finest. Because I study medieval song, the idea of a conference entirely based around the 13th-c. conductus was not something to be missed (though my source material is slightly different, and typically not included in the Ars antiqua category). It was especially interesting to see the conductus discussed within the context of the organum and motet forms, as it has typically been given less prominent standing among the 13th-c. Ars antiqua forms. After writing a paper for a non-specialist audience (Noises of Art) and navigating the challenges of presenting in-depth research in manageable language, I found it refreshing to hear scholars speaking in their native tongues.

The conference was part of a larger research project of the same name, so it was a very practical way for me, a Ph.D. student, to examine the practices of scholars working on large-scale funded projects, and to see how conferences can be an integral part of this kind of work (most grant applications include a list of outputs a research project will produce: conferences, a monograph, &c.), both as a way to promote and present information from the project and as a way to investigate what other scholarship is currently happening within the field.

Particularly interesting to me was the inclusion in the CPI research project of an online database of conductus repertory, and this is another way in which a conference can be a useful tool. In the afternoon session on the first day there was a forum about the project as a whole, and people were asked what they would like to see included in this database, or what they would find particularly helpful. I found the answers interesting – a way to see what some of the major scholars in the field of medieval musicology would like to have included in an online resource. I was pleased to hear almost immediately about the possibility of linking the database to other resources in the future (I believe it was Michael Scott Cuthbert who first spoke about this, though it was mentioned several times during the discussion), because not only does it speak to the ability of e-resources to combine multiple fields of research within a discipline, but it also encourages long-term site maintenance. I’ve been attempting a project to catalogue online medieval musicology-related databases, and one of the most interesting things I’ve noticed is just how many are dead. Broken links, outdated information, or just a notice on the homepage along the lines of: Funding ran out on this project in 2005, so see ya later! I’d love to be able to calculate a median lifespan for a scholarly database in the humanities. I bet the results would be surprising.
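Just to show how simple that calculation would be once the survey data existed, here is a toy sketch in Python; the database names, launch years and last-update years below are entirely invented for illustration.

from statistics import median

# (launched, last updated) years for hypothetical humanities databases
projects = {
    "Database A": (1998, 2005),
    "Database B": (2001, 2004),
    "Database C": (2007, 2013),
    "Database D": (2003, 2011),
}

lifespans = [last - first for first, last in projects.values()]
print("Median lifespan:", median(lifespans), "years")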

As for the database itself, most scholars seemed excited at the prospect of the catalogue, which will certainly make it easier to access this repertory, even if the music will be edited and in modern notation (the question of whether MS images would be available was initially posed by Elizabeth Eva Leach during the forum). But with all the questions about linking the resource to other databases in the future, perhaps image databases could be included in that category, so that scholars will be able to note whether or not an image of a particular MS is available online, and where. If we’re going to create scholarly databases with an encyclopedic slant, we may as well allow ourselves a Wikipedia-esque element and link to as much relevant information as we can. I suppose in the end it will depend on the amount of funding a research project has received, and how much of that total can be allotted to resource creation. And let’s face it, Arts & Humanities funding is almost never enough to do all that we want to.

I’m not sure what the third day of the conference brought to the table, but if the first two days were any indication, a collection of conference proceedings would be a worthwhile investment. Which causes me to wonder: are the questions following a paper ever included in proceedings? This is a case where I’d love it if they were — the discussions were spirited and comedic in turn, but consistently helpful.


September Conference Review (Part 1: Noises of Art)

This is Part 1 of a series of posts informally reviewing three conferences I attended (giving papers at two) in September. For Part 2, click here. For Part 3, click here.

The Noises of Art: Audiovisual Practice in History, Theory and Culture (The School of Art, Aberystwyth University / The Courtauld Institute of Art, London / Aberystwyth Arts Centre) 4-6 September, 2013.

Cantum pulcriorem invenire: Music in Western Europe, 1150-1350 (University of Southampton) 9-11 September, 2013.

DigiPal Symposium III (King’s College, London) 16 September, 2013.

Part 1. The Noises of Art

‘The boundary between visual art and aural modes of creative practice is porous.’ – John Harvey, Professor of Art, eye-ear, School of Art, Institute of Literature, Languages & Creative Arts, Aberystwyth University (excerpt from the mission statement of this conference).

As a research student working on interdisciplinary scholarship, I’m always looking for interesting, non-traditional conferences that might be applicable to my work. My focus on notation and perception does dip its toes into art theory from time to time, and I thought that the Noises of Art conference might be a good way to develop some thoughts on notation as a meeting-place for art and sound. After submitting my abstract, I was invited to speak, with a paper titled ‘Aura, Perception, and Digital Surrogates: On the Modern Interpretation of Early Sources of Music Notation’. Initially, when I saw the programme, I was a bit concerned – mine was one of only a handful of papers dealing with pre-20th-century subject matter. The session in which I spoke was called ‘Seeing/Sound’, and the other two papers were ‘Walking the canal tow-paths of Staffordshire, how can the sounds encountered be captured in visual form? What happens when Klee’s Twittering Birds meets Messiaen’s Petites Esquisses d’oiseaux?’ (Charlotte Jones, Loughborough University), and ‘Real-time graphic visualization of multi-track sound: establishing a cross-modal relationship between geometrical form and electronic music’ (Irete Olowe, Queen Mary, University of London). A third virtual presentation was ‘Octophonetics: early audiovisual practice within the spectrum of noise’ (Jan Thoben, Humboldt Universität, Berlin).

My paper dealt with the existence of music solely in its visual form (specifically English song in the 12th and 13th centuries), the difficulties that arise when attempting an interpretation of music without sonic reference, and ways in which scholars and teachers of early music notation can develop and use online resources incorporating digital images of manuscripts to facilitate the teaching and performance of this notation to undergraduate students. This is also a central theme of my PhD research at Royal Holloway, and writing the paper allowed me to explore the theme of the ‘aura’ in relation to the way we perceive, learn, and interact with early notation. I was introduced to the concept of the aura through studies on image culture (including studies of art perception and theories of photographic engagement in modern digital culture¹), and I felt that this audience, with a strong background in visual art, would be a good place to find some constructive criticism for my research.

Seeing the other papers in my session, I was concerned that mine didn’t really fit with the 20th-century (and beyond) focus of the conference. But as Charlotte and Irete gave their respective papers, I became aware of the parallels between our work. They both were attempting a visual representation of existing sound (either current or historical), while I was attempting to encourage sonic reconstruction of existing visual representation of sound. This link was not lost on the audience, and the questions following the papers were stimulating and varied. The digital methodology of my work allowed me to engage with people presenting papers as diverse as ‘Audible architectural models’ (Urs Walter, Berlin Institute of Technology) and ‘Translating a composition: performing the interval II’ (Johanna Hallsten, Loughborough University).

Happily, I did get to hear another medievalist present. Irene Noy and Michaela Zöschg (both PhD candidates at The Courtauld Institute of Art, London) gave a fantastic presentation entitled ‘Listening art historians: a cross-period collage of seeing and hearing’. Noy spoke about sonic developments in art exhibits in Berlin ca. 1960-1980, and the ‘white box’ layout of many galleries, while Zöschg (hidden from view behind a screen) spoke about Clarissan sisters in the fourteenth century and the acousmatic sound they experienced while listening to and engaging in prayer. I apologize for this very simplistic description of a presentation that delved deep into two contrasting cultures while still drawing effective parallels in scholarship: both scholars presented unique ways in which art historians can engage with sound, both in terms of physical experience (through galleries) and in terms of sonic reconstruction when examining monastic life in the fourteenth century. Their paper went beyond traditional forms of scholarly presentation, incorporating theatrical techniques and sonic disconnect (hiding Zöschg behind the screen) to encourage the audience to think about the perceptive implications of acousmatic sound and the ways in which we engage with sound when we cannot visualize its source.

As a result of this diverse and interesting conference, I came to believe even more strongly in the importance of allowing for a wide definition of the word ‘notation’ and studying how we perceive music’s visual qualities as well as its sound. Many of the artists presenting were working with different methods of creating graphic representations of sound, or allowing naturally-occurring visual patterns to be used as representations of sound (Canadian artist Duncan MacDonald’s use of birch bark to create a player piano scroll, for example), and the wide array of ways that people envisage music proved just how varied perception can be. It was inspiring to speak with so many people working slightly outside the standard curriculum, to hear their passionate opinions about art and education and the interaction of the two, and to become more aware of the importance of connecting with scholars and artists outside our direct scope of research. The experience allowed me to develop a more solid method of speaking about my research to non-specialists without watering down the scholarly content, and I got to meet some really great people.

¹ Latour, Bruno, and Lowe, Adam. ‘The Migration of the Aura, or How to Explore the Original through Its Facsimiles’ in Switching Codes: Thinking Through Digital Technology in the Humanities and Arts, eds. Thomas Bartscherer and Roderick Coover (University of Chicago, 2011). // Murray, Susan. ‘Digital Images, Photo-Sharing, and Our Shifting Notions of Everyday Aesthetics’, Journal of Visual Culture 7 (2008): 147-163.

Posted in conference, interactive music, musicology, notation, Ph.D., research, travel | Tagged , | Leave a comment

Modern Musical Expectation, or, Why Doesn’t This Sound Like The Recording?

This post is in response to a piece entitled ‘What’s Lost When Everything Is Recorded?’ by Quentin Hardy, from the Bits blog at the New York Times. 

The year after I finished college, I hung around town. I took some graduate courses in musicology (because even on my year out I couldn’t not be a student). I worked several jobs ranging from classroom aide to bar waitress, and – true to form for many young musicians – I played in a band. One of my favorite memories from my time playing with this group of people was a Halloween tribute concert we did at a local bar. The annual event featured local groups playing entire sets as famous bands – recent years have included sets ‘by’ Nirvana, Fleetwood Mac, The Pixies, Weezer, and The Velvet Underground, among others. It’s always a fun crowd, and a great excuse for bands to cut loose and be someone else for a night.

This particular year, after much discussion, my band and I decided to devote our stage time to Paul Simon, particularly the Graceland years. We conscripted a handful of friends to sing backup and play auxiliary percussion as well as brass and woodwind instruments. The set wasn’t easy – though we included a few Simon & Garfunkel standards like ‘Cecilia’, we were taking on material like ‘Diamonds on the Soles of Her Shoes’ and ‘Kodachrome’, which are deceptively difficult for all their catchiness. We worked incredibly hard, and had a great time. But during our time together we played plenty of other sets (of our own material) that were the result of hard work and we enjoyed them, too – so why does this one stand out?

Because it was perfect. Everyone played the right note, every time. No one mixed up any lyrics, and the small bar’s sound system was spot on, even with the cacophony of our ragtag wannabe-Ladysmith Black Mambazo friends. The crowd was engaged, singing along with everything, and they adored us all.

Okay, obviously this may not be completely true – people probably messed up all over the place. Words were improvised, chords were reversed, drums were hit in a moment meant to be silent. I’m sure some members of the crowd spent as much time talking amongst themselves as they did listening to us play. But we have no recollection of any of these things, because to this day, I have not seen or heard a recording of this performance. I haven’t even seen a photo of us all onstage as a group. And this is the only time that has ever happened. It wasn’t like we had a huge amount of fame, either — after all, we were just a midwestern band that enjoyed some local success in a college town — but to be any sort of performer in the digital age means you’re going to wind up with some kind of residual media after a gig. Pictures on Facebook, iPhone video posted to YouTube, review in a local music blog, &c. All of which are great for small bands in terms of publicity, but after most shows we played, I remember spending quite a lot of time fixating on these recordings and cringing at the smallest mistakes. 

Admittedly, the recordings were good motivation: if you practice, you won’t have to be embarrassed the next time someone uploads a video of your music (fear is the greatest motivator, after all). But all the same, this era of access, where we can watch something and then immediately watch it again and again, takes some of the gloss away. Rather than allowing ourselves to enjoy the moment, we stop engaging and watch footage of something that just happened. And the fear that comes with the knowledge of constant media monitoring removes some of the freedoms of performance – the freedom to make a mistake and smile and get over it, the freedom to improvise something that may not work out, the freedom to change a lyric and not be accused of forgetfulness.

Hardy’s article focuses on human conversation – he wonders what devices like Google Glass will do to the way we speak to one another. He wonders if we will begin to analyse and study recorded speech patterns in terms of, say, what verbal techniques were most persuasive during a successful business meeting. Hardy questions whether this type of analysis will eventually change our daily rhetoric, asking if ‘speaking from the heart could become speaking from the talking points of a computerized recommendation engine’.

Quite frankly? It might. It’s certainly happened with music. Access to recorded sound has completely changed human expectations of how something should sound, especially when attending a live performance. The music heard in the highest-grossing pop performances today is mostly pre-recorded, allowing for only the slightest chance that the resulting sound will differ from the record. It leads one to wonder: what is the point of going to the live show, other than being in the same room with a famous pop star? Might as well get 10,000 people together and put the record on really, really loudly. Throw some choreography and lasers in there and you’ve got yourself a concert.

The way in which recorded sound has changed audience expectations is not confined to pop music, either. It would be interesting to know how the number of so-called ‘virtuosic’ classical performers has grown, specifically since the advent of recorded sound (The NYT did an interesting piece on ‘virtuoso’ performers in 2011, noting with some wry humor that they’ve become ‘a dime a dozen’). I wonder how much credit technology can take for this. Are we more likely to attempt virtuosic levels of performance because we are more aware of the extant virtuosic talent in the world? How much does recorded sound add to general awareness of the existence of this level of ability? The NYT virtuoso piece noted that since 1954, when Roger Bannister ran the first four-minute mile, ‘runners have knocked nearly 17 seconds off [his] time’. Does this type of self-motivation, musical or otherwise, stem from a mentality that wonders If [s]he did it, why can’t I?

Or are these virtuosos merely performers who are aware of the immediacy of digital media – performers who believe that they must achieve perfection because the reviews will hit Twitter within seconds, the photos will be up on Facebook within hours, and the recorded video will be available by morning? When the whole world has access to recorded media, the whole world can critique a performance. Maybe this is why these virtuosos are becoming ‘a dime a dozen’ – because we expect the performance to sound exactly like the recording.

As a performer who certainly does not count herself among the virtuosic, I keep that Halloween show close to my heart for exactly this reason. It is my virtuoso moment – a review that is my own, not based on the words of others. It is the one performance that exists entirely in memory, and if I want to believe it was perfect no one can prove otherwise. Maybe they remember it differently, but that’s fine – as of now we aren’t able to upload our memories onto the internet, so any imperfections will just have to remain offline.


Music ‘Notation’ for the Facebook Generation?

http://www.djtechtools.com/2013/08/08/mad-zachs-evil-lurks-soundpack-interactive-ableton-lesson/

The web site/forum/blog/store DJ TechTools is ‘taking music releases to the next level’. And the concept is certainly intriguing, if not entirely unique. Mad Zach’s new release ‘Evil Lurks on a Summer Day’ will, instead of coming out through a label, include the Ableton project file, a ‘fully interactive sound pack’, and a tutorial on the use of the grid controllers used to play the song. The concept behind the release is that, after buyers download the file and learn the controls, they will go on to remix it. There’s even a SoundCloud group where users can submit their remixes for potential inclusion in an EP release.

So. What do we think? Is this a modern electronic musician’s answer to music notation? Okay. Hear me out on this one – I know the initial answer to the question of whether this seems akin to any sort of Western classical concept of ‘notation’ is obviously No. There is no system of symbols meant to represent sound – rather, the instruction is in physical motions to be enacted upon a piece of equipment. Perhaps a more apt comparison would be to tablature, where the standardized visual system is effectively instructions for physical action. But music notation could be considered instructions, as well – it’s just that the performer takes a step in the middle to process and translate the received visual data into a learned physical action. After years of playing an instrument the step isn’t even necessary – it’s second nature.

It doesn’t seem likely to catch on. It isn’t practical to have to watch a tutorial for every new song an artist wants to learn (though many classical artists do choose to expose themselves to repeated listenings of repertoire before attempting performance). Nor is it self-perpetuating: users can’t carry this same system of learning over to other songs. Rather, it seems that this ‘interactive’ music project is a way for new DJs (users must already own all the equipment and software necessary to play the track — the download is only $4.99 after that) to familiarize themselves with the equipment and the process of creation involved in making this form of electronic music.

It’s certainly an interesting new step for a style of music that has traditionally been self-taught: a process of accidental-on-purpose creation that has given birth to wildly varied art forms. But, coming in the footsteps of Björk’s recent Biophilia and Lady Gaga’s forthcoming ARTPOP, both ‘interactive’ albums for use with the iPad, perhaps this is the new market for producers of pop music in all its incarnations. Marketing a product like this to a generation that has grown up around the culture of DIY music-making, with wildly easy means of presenting self-created art to the public at large (thanks, Internet!), allows them to be part of it, too! You don’t just listen; you’re an active part of something. It has placed music in a context similar to any sort of large-scale social media use – the ability to share with the click of a button.

This is by no means a critique of the accessibility or ease of music production. The long-term effects of the DIY generation remain to be seen. As with any creative output, there are advantages to a methodology that requires time to wait, ruminate, and edit, but there is an artistic quality in itself to the rapid-fire pace and the real-time feedback from users waiting at their keyboards to gobble it up.

As far as DJ TechTools goes, I say kudos to any attempt at furthering musical education and interaction. The tutorial method of teaching may not fit the exact requirements of a notational system, but it certainly achieves a similar result. And if it is a result that encourages people to play music, I don’t see anything wrong with that.
