Anais do V Workshop de Música Ubíqua
Proceedings of the V Workshop on Ubiquitous Music
Das Artes Digitais à Música Ubíqua
From Digital Arts to Ubiquitous Music
29 de Outubro a 1 de Novembro de 2014
October 29th - November 1st, 2014
Vitória - ES - Brazil
The articles published in these proceedings were typeset from the final originals submitted by the authors, without edits or corrections by the technical committee.
Editoração e arte / Publishing and art
Flávio Luiz Schiavoni
Organização / Organization
Coordenadores / Organizing Committee
Geral / General Chair
• Leandro L. Costalonga, Universidade Federal do Espírito Santo (Nescom)
Artigos / Papers Chair
• Maria Helena de Lima, Universidade Federal do Rio Grande do Sul (CAp)
• Damián Keller, Universidade Federal do Acre (NAP)
Atividades artísticas / Artistic Session Chair
• Marcus Vinícius Marvila das Neves, Universidade Federal do Espírito Santo (Nescom)
Divulgação / Public Liaison Chair
• Flávio Schiavoni, Universidade Federal de São João Del Rei (UFSJ)
Comitê de Programa / Program Committee
• Almerinda da Silva Lopes, UFES, Visual Arts
• Andrew R. Brown, Griffith University
• Damián Keller, Universidade Federal do Acre (NAP)
• Daniel Spikol, Malmö University
• Evandro M. Miletto, IFRS, Porto Alegre
• Flavio Schiavoni, IME – USP
• Georg Essl, University of Michigan
• Ian Oakley, UNIST, South Korea
• José (Tuti) Fornari, UNICAMP
• Joseph Timoney, NUIM
• Juan P. Bello, New York University
• Leandro Costalonga, UFES
• Liliana Mónica Vermes, UFES
• Luciano Vargas Flores, UFRGS (LCM, ENDEEPER)
• Marcelo Johann, UFRGS
• Marcelo Milrad, Linnaeus University
• Marcelo Queiroz, IME - USP
• Marcelo Soares Pimenta, UFRGS (LCM)
• Maria Helena de Lima, UFRGS (CAp)
• Martha Paz, UFRGS
• Mônica Estrázulas, UFRGS
• Nuno Otero, Linnaeus University
• Patrick McGlynn, NUIM - National University of Ireland, Maynooth
• Reginaldo Braga, UFRGS
• Rodolfo Coelho de Souza, USP-RP
• Rodrigo Cicchelli Velloso, UFRJ
• Rogério Costa, USP (ECA)
• Silvio Ferraz, USP (ECA)
• Victor Lazzarini, NUIM - National University of Ireland, Maynooth
Prologue to the Proceedings of the V Workshop on Ubiquitous Music (V UbiMus)
Damián Keller, Maria Helena de Lima, Flávio Schiavoni (Editors)
Ubiquitous Music Group (Grupo de Música Ubíqua)
October 2014
Over the last five years, the Ubiquitous Music Workshop has grown from an informal gathering on
ongoing projects and ideas into a full-blown event yielding key references for the area. The initial
reports on programming platforms for mobile environments [Lazzarini et al. 2013], on educational
initiatives based on ubimus research [Lima et al. 2012], on design patterns that shaped recent
developments in musical software classifications [Flores et al. 2014], and on everyday creativity
research in music making [Pinheiro da Silva et al. 2014] were all initially discussed in ubimus
workshops.
The Fifth Workshop on Ubiquitous Music took place at the Federal University of Espírito Santo
(UFES), from October 31 to November 3. Researchers dealing with digital arts and ubiquitous
music shared proposals, initial results and complete research projects. The V UbiMus featured
contributions from Australia, Ireland, Sweden and Italy. These proceedings feature reports in
Portuguese and English, encompassing full papers, posters and summaries of artistic works.

Six full-paper proposals were accepted. Andrew Brown presented results from his group's
ongoing research on meaningful engagement. This work has close ties to the dialogical approach to
education that has been championed by Lima et al. (2014). Keller's (2014) paper identifies three
methodological approaches to creativity-centered design: the computational approach, the dialogical
perspective and the ecologically grounded framework. The text analyzes how these three methods
relate to a current definition of the ubiquitous music field and proposes two new theoretical tools to
study design qualities from a creativity-centered perspective: volatility and rivalry. Timoney et al.
(2014) describe the EU Beathealth project as an initiative to create an intelligent technical
architecture capable of delivering embodied, flexible, and efficient rhythmical stimulation adapted
to the individuals’ motor performance and skills for the purpose of enhancing or recovering
movement activity. Lazzarini et al. (2014) focus on the prototyping stage of the design cycle of
ubiquitous music ecosystems. The paper presents three case studies of prototype deployments for
creative musical activities. Farias et al. (2014) report on their findings regarding the use of the
time tagging metaphor for musical creativity endeavors. The authors developed a new ubiquitous
music prototype and carried out experimental work to investigate the relationships between the
technological support strategies and their creative yield, involving assessments of creative products
produced with Audacity, a mixDroid first generation prototype and a mixDroid second generation
prototype. Villena (2014) aims to observe the potential contributions to musical composition of two
research areas: ubiquitous music and soundscape studies. The paper draws on Brazilian
composers' works, establishing links between the traditional soundscape composition approach and
the latest advances in experimental art.
The remaining presentations consisted of artistic works and posters. Schiavoni and Costalonga's
(2014) text serves as a proposal for computer science research to delve into ubimus issues,
stimulating discussions at the borderland between ubiquitous computing and music practice. In a
complementary vein, Santos et al. (2014) describe the use of ubiquitous computing to promote
learning activities in the context of an elementary school music class. Delgado et al. (2014) present
a first prototype supporting collaborative musical activities using location-aware mobile technology
based on Near Field Communication (NFC) and Multi-Agent Systems (MAS). Gobira et al. (2014)
present the development of an experimental interactive installation that uses a 3D motion sensor to
capture body movement of a person balancing on a slackline tape. D’Amato’s (2014) Progressive
Disclosure is a short piece situated in an imaginary landscape where an unknown machine is
progressively disclosed to reveal its inner functions. And McGlynn’s (2014) DroneUnknown aims
to explore the ambiguous interaction space of self-modifying instruments. The performer navigates
through the material, discovering it while it is shared with the audience.
Full-paper reports
Brown et al. (2014) report on their experiences using ubiquitous computing devices to introduce
music-based creative activities into an Australian school. The use of music applications on mobile
computers (iPads) made the proposed activities accessible to students with a limited range of prior
musical background. The activities were designed to be meaningful and contribute toward personal
resilience in the students. Brown and coauthors describe the approach to meeting the objectives of
the study and discuss their results. The paper includes an overview of the ongoing project on music
education including its aims, objectives and utilisation of mobile technologies and software with
generative and networkable capabilities. Two theoretical frameworks inform the research design: the
meaningful engagement matrix and personal resilience. These frameworks guide the activity
planning. The report focuses on the activities undertaken and shares results from questionnaires,
interviews, musical outcomes and observation.
The paper by Farias et al. (2014) focuses on mixing as the object of study of creativity-centered
interaction design. The authors applied the time tagging metaphor to develop a new ubiquitous
music prototype and carried out experimental work to investigate the relationships between the
technological support strategies and their creative yield. A musician produced 30 sound mixes using
different tools and similar sound resources in the same location. From that output, three creative
products – each of approximately 3 minutes – were chosen. In the first creative session the sound
editor Audacity was used. The second session was done with the ubiquitous music system mixDroid
1.0 or first generation (1G). The third session involved the use of a new prototype – mixDroid 2.0
or second generation (2G). The time invested in each mix was 97 minutes with Audacity, 6:30
minutes with mixDroid 1G, and 3:30 minutes with mixDroid 2G. Twenty-four subjects evaluated the
three products through the Creative Product Profile (CrePP-NAP) protocol. Results indicated very
similar profiles for the mixDroid 1G and 2G products. On a scale of -2 to +2, differences were not
larger than 0.17 points. Scores for the descriptors 'relaxing' and 'pleasant' were 0.96 and 1.42 points
higher for the Audacity-made product, but variations among scores were also high. Originality and
expressiveness were slightly higher for Audacity, by 0.21 and 0.42 points respectively. In contrast,
the relevance factor of the mixDroid 2G product was 0.25 points higher than the score given to the
Audacity product. This study indicates that the application of the time tagging metaphor boosts the
efficiency of the creative activity, but that boost does not extend to the creativity profile of the
products.
Timoney et al. (2014) describe the EU Beathealth project as an initiative to create an intelligent
technical architecture capable of delivering embodied, flexible, and efficient rhythmical stimulation
adapted to individuals’ motor performance and skills for the purpose of enhancing or recovering
muscle movement. The text explains how the project embodies the principles of Ubiquitous Music
and how it draws on many aspects of this field. The ‘Beathealth’ collaborative research project is
essentially about using the power of ‘beats’ and rhythm in new technology applications to help
achieve better health. For those with declining physical health, such as patients with Parkinson’s
disease, the project strives to create tools that assist patients in therapy and quicken their rate of
improvement. The proposed technology uses data gleaned from regular, repeated bodily movement
employing real-time sensors. This drives a synchronous adaptation of the music, thereby reinforcing
the rhythm of the activity at a neurological level. Among the expected results, the authors envisage
increased harmony between body and mind activity, hopefully boosting the beneficial impacts of
physical activity.
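As a rough illustration of the kind of sensor-to-music coupling described above, the following sketch estimates a movement cadence from step timestamps and nudges a playback tempo toward it. It is a hypothetical sketch, not the Beathealth architecture; the function names and the smoothing constant are assumptions made for the example.

// Hypothetical sketch of the sensor-to-music coupling described above (not the Beathealth code):
// estimate the user's cadence from step timestamps and ease the playback tempo toward it.
const stepTimes: number[] = [];   // timestamps of detected steps, in milliseconds
let musicBpm = 100;               // current playback tempo, assumed starting value

function onStepDetected(timestampMs: number): void {
  stepTimes.push(timestampMs);
  if (stepTimes.length < 2) return;

  // Mean interval over the last few steps gives the movement cadence.
  const recent = stepTimes.slice(-8);
  const intervals = recent.slice(1).map((t, i) => t - recent[i]);
  const meanIntervalMs = intervals.reduce((a, b) => a + b, 0) / intervals.length;
  const cadenceBpm = 60000 / meanIntervalMs;

  // Ease the tempo toward the cadence instead of jumping, so the adaptation stays smooth.
  const smoothing = 0.1;          // assumed constant
  musicBpm += smoothing * (cadenceBpm - musicBpm);
}

// Example: steps roughly every 500 ms (about 120 steps per minute).
[0, 500, 1010, 1495, 2005].forEach(onStepDetected);
console.log(musicBpm.toFixed(1));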
Villena (2014) aims to observe the possible contributions to musical composition from two areas of
research: ubiquitous music and the work done in soundscape composition. Although these two areas
present important points in common, especially when considering their theoretical foundations, the
author argues that the first is centered on computing, while the second deals with acoustic ecology.
The conceptual boundaries of the two areas are established through a discussion of artists working
in Brazil encompassing their methodologies, their conceptions and the mutual influence within the
area of music composition.
Keller's (2014) paper identifies three methodological approaches to creativity-centered design: the
computational approach, the dialogical perspective and the ecologically grounded framework. The
author analyzes how these three methods relate to a current definition of the ubiquitous music field.
Social interaction is one of the factors to be accounted for in ubimus experimental studies. Hence,
he proposes the label social resources for the shared knowledge available within a community of
practice. Five aspects of creativity-centered design that have targeted social resources are identified.
Material resources are factors to be considered for the design of ubimus ecosystems, so two new
design qualities are proposed as variables for experimental studies: volatility and rivalry. This
discussion is framed by a split between creative products and creative resources, which points to
three observables: material resources, material products and material by-products, including
creative waste. The discussion concludes with a summary of the main arguments of the paper,
pointing to applications of these concepts in experimental design studies.
Lazzarini et al. (2014) focus on the prototyping stage of the design cycle of ubiquitous music
(ubimus) ecosystems. The paper presents three case studies of prototype deployments for creative
musical activities. The first case exemplifies a ubimus system for synchronous musical interaction
using a hybrid Java-JavaScript development platform, mow3s-ecolab. The second case makes use
of the HTML5 Web Audio library to implement a loop-based sequencer. The third prototype, an
HTML-controlled sine-wave oscillator, provides an example of using the Chromium open-source
sandboxing technology Portable Native Client (PNaCl) for audio programming on the web. The
Csound PNaCl environment provides programming tools for ubiquitous audio applications that go
beyond the HTML5 Web Audio framework. This new approach demanded porting the Csound
language and audio engine to the PNaCl web technology. The limitations and advantages of the
three approaches (the hybrid Java-JavaScript environment, the HTML5 audio library and the
Csound PNaCl infrastructure) are discussed in the context of rapid prototyping of ubimus
ecosystems.
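As an illustration of the kind of browser-based prototype mentioned above, the following sketch wires a sine-wave oscillator to an HTML slider using the standard Web Audio API. It is a minimal sketch for illustration only, not code from Lazzarini et al. (2014); the element id freq-slider is a hypothetical name.

// Minimal Web Audio sketch: an HTML-controlled sine-wave oscillator.
// Illustrative only; not taken from the prototypes of Lazzarini et al. (2014).
const ctx = new AudioContext();
const osc = ctx.createOscillator();   // sine wave by default
const gain = ctx.createGain();
gain.gain.value = 0.2;                // keep the output level modest
osc.connect(gain).connect(ctx.destination);

// A hypothetical slider, e.g. <input id="freq-slider" type="range" min="110" max="880">
const slider = document.getElementById("freq-slider") as HTMLInputElement;
slider.addEventListener("input", () => {
  osc.frequency.setValueAtTime(Number(slider.value), ctx.currentTime);
});

// Browsers only start audio after a user gesture.
document.addEventListener("click", () => {
  ctx.resume();
  osc.start();
}, { once: true });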
Posters and artistic presentations
Schiavoni and Costalonga (2014) state that ubimus concepts and motivations – as defined by Keller
et al. (2009) – include merging sound sources and musical support technology with environmental
resources. Previous research and efforts from the Ubiquitous Music Group included several
discussions involving Collective Music Creation [Ferraz and Keller 2014], Interaction Aesthetics
[Keller et al. 2014], and Creativity-centered Software Design [Lima et al. 2012], pointing to open
issues in current musical practices [Keller et al. 2011] and to relevant aspects of musical and
extra-musical dimensions. Beyond the musical and social discussion in ubimus, the authors suggest
that computer scientists can also take part in this research field, contributing to the growth of
ubimus initiatives.
Thus, the text serves as a proposal for Computer Science research to delve into ubimus issues,
stimulating discussions in the interdisciplinary borderland between ubiquitous computing and
music.
Delgado et al. (2014) developed a prototype based on Near Field Communication (NFC) and
Multi-Agent Systems (MAS). NFC is a short-range wireless communication technology that enables
users to connect the physical world with the virtual [Want 2011]. The prototype uses a Multi-Agent
System (MAS) framework and is based on previous efforts on mobile collaboration [Gil et al.
2014]. This framework features the instantiation of a distributed system that allows mobile devices
to perform individual tasks through their agents and also to perform common tasks such as
communication among agents/devices [Wooldridge 2002]. The authors present the first prototype of
a framework that supports collaborative musical activities using location-aware mobile technology.
To explore the design potential, they propose a series of workshops with ubimus practitioners to
elicit preferences and requirements. The proposal is part of a long-term project exploring how
mobile technologies can enable the emergence of ubiquitous musical activities.

McGlynn (2014) presents an experimental performance tool with random characteristics. Previous
computer music approaches have pointed out a number of problems that arise while designing
'intelligent' digital musical instruments: the performance tools modify their behaviour in response to
user input patterns. This concept may appear to be intelligent and helpful but in practice such
designs can hinder the formation of meaningful performer-instrument relationships. Thus, Cook
(2001) states that "Smart instruments are often not smart". DroneUnknown aims to explore this
ambiguous interaction space by capitalizing on the unpredictability of self-modifying instruments,
rather than trying to restrain it. The program runs on the Oscar multi­touch performance platform
and draws random source material from a bank of samples at initialization time. The player must
navigate through the material during the live performance and discover it while it is shared with the
audience. This leads to a number of possible approaches, ranging from the tentative exploration of
the program states to more aggressive journeys. A discussion of the gestural tools available for
performance of ubiquitous music concludes with a live demonstration of DroneUnknown.

D’Amato’s (2014) Progressive Disclosure is a short piece situated in an imaginary landscape where
an unknown machine is progressively disclosed. Long, slow sound objects and impulsive sounds are
merged and overlapped to develop an imaginary panorama. Both synthesized and acoustically
derived sounds are used. The piece proposes a reflection on how to approach object properties or
qualities and their functions.

Santos et al. (2014) describe the use of ubiquitous computing in an elementary school music class,
to promote learning of rhythmic concepts. Starting from bodily contacts already known to the
students, such as clapping, the classroom is presented as a collective musical tool. The school
environment is turned into a sound lab where participants interact. The research adopts an
exploratory method. Data collection is conducted through questionnaires, focus groups and direct
observations.

Gobira et al. (2014) present an experimental interactive installation that uses 3D motion sensors to
capture body movement to create graphics and sounds. The proposal seeks to merge the fields of
music, video art, technology, gaming and performance as an audiovisual product. The proposed
installation suggests a connection between body and machine. Sounds, designed from various
sources, are triggered by voluntary and involuntary processes while maintaining the balance on a
slackline tape. The limb movements (feet, hands, and head) are translated into data for a generative
audio source developed in openFrameworks and Pure Data. The body movement controls the
variations in resonance frequency, synthesizing five independent sound waves. Through the
translation of body effort into audiovisual stimuli and the combination of postural control with
external audiovisual stimuli, the authors expect to create an immersive notion of interface control.

Summing up, as in previous workshops we see a healthy mix of technical, conceptual and applied
proposals. This edition features a growing presence of artistic projects, providing a chance for a
hands-on experience of ubiquitous music in the making. Interestingly, two major approaches to
creativity are represented [Keller 2013]: the algorithmic view and the ecocognitive creative
practices. It is still a mystery whether these two approaches to musical creativity will furnish viable
alternatives to the predominant acoustic-instrumental paradigm. While gathering experience
through exploratory and participatory methods, the Ubiquitous Music Group may provide a space
for community endeavors that are currently lacking in the compartmentalized disciplinary venues.
Echoing the sounds of what may lie ahead, we are tempted to say, “join us, resistance is futile!”.
References
Brown, A., Stewart, D., Hansen, A. & Stewart, A. (2014). Making meaningful musical experiences
accessible using the iPad. In D. Keller, M. H. Lima & F. Schiavoni (ed.), Proceedings of the V
Workshop on Ubiquitous Music (V UbiMus). Vitória, ES: Ubiquitous Music Group. Retrieved from
http://compmus.ime.usp.br/ubimus2014.
D'Amato, A. (2014). Progressive Disclosure [Ubiquitous Music Artwork]. In D. Keller, M. H. Lima
& F. Schiavoni (ed.), Proceedings of the V Workshop on Ubiquitous Music (V UbiMus). Vitória,
ES: Ubiquitous Music Group. Retrieved from http://compmus.ime.usp.br/ubimus2014.
Farias, F. M., Keller, D., Pinheiro da Silva, F., Pimenta, M. S., Lazzarini, V., Lima, M. H.,
Costalonga, L. & Johann, M. (2014). Suporte para a criatividade musical cotidiana: mixDroid
segunda geração. In D. Keller, M. H. Lima & F. Schiavoni (ed.), Proceedings of the V Workshop on
Ubiquitous Music (V UbiMus). Vitória, ES: Ubiquitous Music Group. Retrieved from
http://compmus.ime.usp.br/ubimus2014.
Gil de la Iglesia, D. et al. (2014). A Self-Adaptive Multi-Agent System Approach for Collaborative
Mobile Learning. Under submission to Transactions on Learning Technologies.
Gobira, P., Prota, R. & Travenzoli, Í. (2014). Balance: Um estudo sobre a tradução digital do
corpo em equilíbrio [Ubiquitous Music Artwork]. In D. Keller, M. H. Lima & F. Schiavoni (ed.),
Proceedings of the V Workshop on Ubiquitous Music (V UbiMus). Vitória, ES: Ubiquitous Music
Group. Retrieved from http://compmus.ime.usp.br/ubimus2014.
Keller, D. (2013). A mão na massa da criatividade musical (prólogo) / La mano en la masa de la
creatividad musical (prólogo) / Musical creativity (prologue). In D. Keller, D. Quaranta & R. Sigal
(ed.), Sonic Ideas, Vol. Criatividade Musical / Creatividad Musical. México, DF: CMMAS.
Keller, D. (2014). Characterizing Resources in Ubimus Research: Volatility and Rivalry. In D.
Keller, M. H. Lima & F. Schiavoni (ed.), Proceedings of the V Workshop on Ubiquitous Music (V
UbiMus). Vitória, ES: Ubiquitous Music Group. Retrieved from
http://compmus.ime.usp.br/ubimus2014.
Keller, D., Barros, A. E. B., Farias, F. M., Nascimento, R. V., Pimenta, M. S., Flores, L. V.,
Miletto, E. M., Radanovitsck, E. A. A., Serafini, R. O. & Barraza, J. F. (2009). Ubiquitous music:
concept and background (Música ubíqua: conceito e motivação). In Proceedings of the National
Association of Music Research and Post-Graduation Congress - ANPPOM (Anais do Congresso da
Associação Nacional de Pesquisa e Pós-Graduação em Música - ANPPOM) (pp. 539-542).
Goiânia, GO: ANPPOM. http://www.anppom.com.br/anais.php.

Keller, D., Flores, L. V., Pimenta, M. S., Capasso, A. & Tinajero, P. (2011). Convergent trends
toward ubiquitous music. Journal of New Music Research 40 (3), 265-276. (Doi:
10.1080/09298215.2011.594514.)
Lima, M. H., Brandão, R., Keller, D., Pezzi, R., Pimenta, M., Lazzarini, V., Costalonga, L.,
Depaoli, F. & Kuhn, C. (2014). Música Ubíqua no Colégio de Aplicação da UFRGS e Centro de
Tecnologia Acadêmica e Ciência Cidadã Jr: Transversalidades em pesquisa em ensino [Poster]. In
D. Keller, M. H. Lima & F. Schiavoni (ed.), Proceedings of the V Workshop on Ubiquitous Music
(V UbiMus). Vitória, ES: Ubiquitous Music Group. Retrieved from http://compmus.ime.usp.br/ubimus2014.
Lima, M. H., Keller, D., Pimenta, M. S., Lazzarini, V. & Miletto, E. M. (2012). Creativity-centred
design for ubiquitous musical activities: Two case studies. Journal of Music, Technology and
Education 5 (2), 195-222. (Doi: 10.1386/jmte.5.2.195_1.)
McGlynn, P. (2014). DroneUnknown: An experiment in embracing unpredictability in live
electronic performance [Ubiquitous Music Artwork]. In D. Keller, M. H. Lima & F. Schiavoni
(ed.), Proceedings of the V Workshop on Ubiquitous Music (V UbiMus). Vitória, ES: Ubiquitous
Music Group. Retrieved from http://compmus.ime.usp.br/ubimus2014.
Real­Delgado, Y., de la Iglesia, D. G. & Otero, N. (2014). Exploring the potential of mobile
technology for creating music collaboratively [Poster]. In D. Keller, M. H. Lima & F. Schiavoni
(ed.), Proceedings of the V Workshop on Ubiquitous Music (V UbiMus). Vitória, ES: Ubiquitous
Music Group. Retrieved from http://compmus.ime.usp.br/ubimus2014.
Santos, T., Filippo, D. & Pimentel, M. (2014). Computação Ubíqua e a interação corporal na
aprendizagem de execução rítmica [Poster]. In D. Keller, M. H. Lima & F. Schiavoni (ed.),
Proceedings of the V Workshop on Ubiquitous Music (V UbiMus). Vitória, ES: Ubiquitous Music
Group. Retrieved from http://compmus.ime.usp.br/ubimus2014.
Schiavoni, F. & Costalonga, L. (2014). Ubiquitous computing meets ubiquitous music [Poster]. In
D. Keller, M. H. Lima & F. Schiavoni (ed.), Proceedings of the V Workshop on Ubiquitous Music
(V UbiMus). Vitória, ES: Ubiquitous Music Group. Retrieved from
http://compmus.ime.usp.br/ubimus2014.
Timoney, J., Lazzarini, V., Ward, T., Villing, R., Conway, E. & Czesak, D. (2014). The Beathealth
project: Synchronizing movement and music. In D. Keller, M. H. Lima & F. Schiavoni (ed.),
Proceedings of the V Workshop on Ubiquitous Music (V UbiMus). Vitória, ES: Ubiquitous Music
Group. Retrieved from http://compmus.ime.usp.br/ubimus2014.
Villena, M. (2014). Música ubíqua e paisagens sonoras. Possíveis contribuições. In D. Keller, M. H.
Lima & F. Schiavoni (ed.), Proceedings of the V Workshop on Ubiquitous Music (V UbiMus).
Vitória, ES: Ubiquitous Music Group. Retrieved from http://compmus.ime.usp.br/ubimus2014.

Want, R. (2011). Near Field Communication. IEEE Pervasive Computing 10 (3), 4-7, July-September.
Wooldridge, M. (2009). An Introduction to MultiAgent Systems (2nd ed.). John Wiley & Sons.
ISBN-13: 978-0470519462.
Sumário / Contents

Música ubíqua e paisagens sonoras. Possíveis contribuições ........ 1
Exploring the potential of mobile technology for creating music collaboratively ........ 15
DroneUnknown: An experiment in embracing unpredictability in live electronic performance ........ 17
Suporte para a Criatividade Musical Cotidiana: Mixdroid Segunda Geração ........ 18
Making meaningful musical experiences accessible using the iPad ........ 30
Música Ubíqua no Colégio de Aplicação da UFRGS e Centro de Tecnologia Acadêmica e Ciência Cidadã Jr: transversalidades em pesquisa em ensino ........ 45
The Beathealth Project: Synchronising Movement and Music ........ 46
Characterizing Resources in Ubimus Research: Volatility and Rivalry ........ 57
Prototyping of Ubiquitous Music Ecosystems ........ 69
Ubiquitous Computing meets Ubiquitous Music ........ 81
Progressive Disclosure ........ 84
Computação Ubíqua e a interação corporal na aprendizagem de execução rítmica ........ 85
Balance: um estudo sobre a tradução digital do corpo em equilíbrio ........ 87
Música ubíqua e paisagens sonoras. Possíveis contribuições / Ubiquitous music and soundscapes: possible contributions

Marcelo Ricardo Villena (UNILA/UFMG)

Abstract

This text aims to observe the possible contributions, within musical composition, between two research areas: ubiquitous music and work carried out under the concept of soundscape. Although these two areas have important points in common, above all if we consider some of the theoretical foundations they employ, they have distinct research focuses, the first centered on computing and the second on acoustic ecology. The article first addresses the conceptual delimitation of the two areas and then discusses writings by authors working in Brazil, with the aim of analyzing their methodologies, their conceptions and the possible mutual contributions within the scope of composition.

Keywords: musical composition, pervasive computing, acoustic ecology.
Introduction

The field of musical composition is going through a moment of redefinition. Traditional composition methodologies, anchored in a creative process that dispenses a priori with collaboration between composer and performer, in the fixation of symbols on ruled paper, or in the exclusive use of the palco italiano[1] as the performance environment, seem to be contested daily. Today we observe proposals in which the composer leaves the comfort of the studio, experiences the surroundings with the senses, tests methodologies of collective creation and employs technologies that do without staff notation. Among these diverse forms of creation, which question the procedures of tradition, we can mention ubiquitous music and composition based on the study of soundscapes.

[1] The palco italiano (proscenium stage) is one in which the performers are placed in front of the audience, as in a movie theater.

These two concepts encompass interests that go beyond the field of musical composition. Soundscapes are an object of study of, for example, anthropology, sociology, urbanism, ecology, biology, engineering, acoustics, the visual arts and education, among others. In fact, the concept, created by Murray Schafer at the end of the 1960s, was conceived as a multidisciplinary research field within a larger area called acoustic ecology, whose ultimate aim was to watch over the quality of ecosystems in their sonic aspect. Schafer's intention was to create teams of professionals from several disciplines who would contribute their knowledge to the development of research of a collective character.
Ubiquitous music, on the other hand, is a concept derived from ubiquitous computing, that is, from the area of computing concerned with mobile devices, networks and tools accessible in different objects and places, such as a smartphone accessing an internet network at an airport. Ubiquitous music studies, mainly, the possibilities that these tools offer to musical practice in different domains of study: composition, education, performance practices and so on.

In this way, we understand that the two concepts address different questions, with the focus in one case on the study of the environment and in the other on technology. Our main interest is to see what differences exist between the two areas, in order finally to understand how their research efforts, which take place simultaneously, can feed back into each other, without falling into the temptation of one area wanting to override the other.
Soundscape

First of all, we should clarify that the term soundscape, which can be defined as all the sonorities present in the surroundings that are accessible to human perception, may be new as a concept, but it names a phenomenon that has always been under discussion.[2] Bernie Krause, for example, argues (based on experiences of listening to geophony with indigenous groups) that music most likely originated in the listening to environmental sound (KRAUSE, 2013, pp. 40-41). Primitive humans, on perceiving the sounds of their surroundings, would have sought to imitate and organize them in order to insert them into their mythical-magical rites. However, even though allusions to environmental sound phenomena can be observed in almost every human repertoire of every period and region of the world, through mimetic procedures incorporated into a musical discourse of a predominantly abstract character, the use of environmental sound itself as compositional material only gains a 'solo' role with the experiences of musique concrète (mediated by recording and playback technologies) and with the works of John Cage, in which environmental sound enters even without technological mediation, as a factor of indeterminacy. An example is the emblematic piece Tacet 4'33'', which consists basically of a period of silence that allows environmental sound to manifest itself without interference.

[2] There are countless descriptions of soundscapes prior to Schafer, both in literature and in the theoretical and aesthetic discussion of music itself.

The main question introduced by Cage, beyond the factor of indeterminacy, was the possibility of understanding environmental sound as an object of aesthetic enjoyment: 'Wherever we are, what we hear is mostly noise. When we ignore it, it disturbs us. When we listen to it, we find it fascinating.' (CAGE, 1973, p. 3). This form of relating to the sound of the environment, as a ready-made, finished 'artistic' object, can nevertheless be observed in other periods and authors, for example in an ancient Taoist text and in the cycle of pieces Presque Rien by Luc Ferrari.
A poem by the Taoist thinker Zhuangzi (369?-286? BC) presents an opposition between 'the music of men', understood as a 'limited' sound phenomenon, and the 'music of the Earth', of unlimited character. At the end of the text, the author mentions the experience of a celebrated musician (named Zhao) who understands that when he plays a sound on his instrument he ends up neglecting all the other sounds of the world (ROTHENBERG & ULVAEUS, 2001). Luc Ferrari, in turn, in Presque Rien chose (in rebellion against Pierre Schaeffer's guidelines on musique concrète) to record the sound of a particular sonic environment and present it to the public with as few alterations from editing and processing as possible.

Could we label these proposals, in which the sounds the world offers are presented almost without interference, as 'anti-music'? We prefer to argue that we could not. What is at stake, in fact, is understanding the possibility of perceiving environmental sound as an event that can be used as musical material. These are proposals that carry a tacit invitation to imagine new musical conceptions grounded in an intimate and concrete (real) dialogue with the sound of the surroundings in its spontaneous manifestation, instead of denying it through isolation in acoustically closed rooms.

Starting from Cage's experiences, Murray Schafer pointed in a different direction: his concern was not simply aesthetic but social and environmental. Tacitly alluding to conceptual art, the creation of the term implied imagining the world as a great work of sound art in which every human being would be a participant. Caring for this 'collective composition' would be a task for several segments of society, and to that end he assembled a research team in which the composer would contribute his 'sensitivity':

While we had many musicians in our soundscape courses, I knew from the beginning that we were not training composers but were trying to define a new profession that did not yet exist and even today does not exist to the extent desirable. I imagined a sound specialist combining technical skills and social concerns with the aesthetic sensitivity of a composer. (R. Murray SCHAFER, 1993)

The work of the composers on his team at Simon Fraser University, however, gained unexpected relevance, creating what Barry Truax came to call soundscape composition (TRUAX, 2002), a genre of electroacoustic composition guided by a referential intentionality, that is, composing with materials that allude to environmental sonorities while remaining attentive, above all, to the relations present in the environment under study. Surprisingly, almost four decades after the emergence of this style, there are composers who declare that they make soundscape composition without considering the environment globally, taking a sonority isolated from its context in a poetics closer to spectral music.[3]

[3] The aesthetic principles of soundscape composition, which can be found on the SFU website based on Truax's conclusions, are clear in this regard: 1) The sounds are worked in such a way that the listener can recognize the origin of the materials. 2) The listener's knowledge of the environment and of its psychological context is invoked. 3) The composer's knowledge of the environment and of its psychological context influences the shape of the composition at every level. 4) The work increases our understanding of the world, and its influence extends to our everyday perceptual habits. (TRUAX, 2002). Paraphrased from the text on the website: <http://www.sfu.ca/~truax/scomp.html>.

From soundscape composition there emerged several bodies of work based on environmental sound, along with related theories: ecomusic, ecocentric music, eco-composition, ecostructuralism, environmental performance works and ecoacoustics (GUILMURRAY, 2014). The domain of these works, in general, is computer music, ranging from an intimate relation between perception and the study of the environment itself as a compositional process (ecoacoustic) to the use of ecological theories in the virtual environment as the foundation for building compositional models, with perhaps less concern for revealing the landscape itself (eco-composition).

Finally, we should consider the study of soundscapes as a foundation for the composition of instrumental music. Besides the works of Schafer himself (the 'situational opera'[4] The Princess of the Stars),[5] there are experiences by composers from the original core of soundscape composition combining electronics and acoustic instruments (Phantasy for Horns, by Hildegard Westerkamp) and works by the German composer Peter Ablinger, such as his cycle of Regenstücke, for various solo instruments imitating rain sounds.[6] In Brazil, Ulises Ferretti and Marcelo Villena also move within this domain,[7] developing specific methodologies that, evidently, are not directly related to ubiquitous music, but that may perhaps draw on its contributions to enrich their poetics.

[4] Composed to be performed in a specific location.
[5] Additional information can be found on the opera's website: <http://www.patria.org/pdp/CHAOS/POS/PRINCESS.HTM> (R. Murray SCHAFER, 2014).
[6] Additional information about these pieces by Ablinger can be obtained on his website: <http://ablinger.mur.at/> (ABLINGER). There are other examples we could cite, but they are not specifically tied to a continuous body of work on the environmental theme, as is the case with these authors.
[7] (FERRETTI, 2006) and (VILLENA, 2013). Both authors built their master's portfolios entirely around the idea of listening to soundscapes as the foundation for composition with acoustic instruments. In his doctorate, Ferretti chose to present some works for acoustic instruments (stage pieces), some electronic works (installations) and some mixed works (installations with performance). The links to these works are given in the bibliographic references.
Ubiquitous music

If the term soundscape originates in environmental listening, initially in search of aesthetic pleasure and of a factor of indeterminacy (Cage), later defining its field of study within the scope of acoustic ecology (Schafer), to be finally used as the aesthetic foundation of a specific genre of electroacoustic music (soundscape composition), ubiquitous music borrows its term from ubiquitous computing and therefore has a different field of study.
This field is evidently not reduced to the exclusive study of technological tools; rather, it covers the interactions between people and devices, and between these technological means and the environment. When we look at the literature on ubiquitous music, we can easily see that the central focus of attention is the capacity technology has to generate new relations between the users of devices and their musical experience, the sharing of data, or even the way technology makes new forms of environmental perception possible. In this case, however, we are permanently facing a mediated, not a direct, perception. In every text examined, the discussion of which device and which tool is being used is present. Soundscape studies, in contrast,[8] place more emphasis on sensory perception, on the confluence of information received by the different senses, highlighting the bodily experience of the environment. This divergence of research focuses ends up producing works of a different character, whether in composition, in performance practices or in music education. Keeping this difference in mind, let us return to the definition of ubiquitous music, understanding it as a terminology derived from computing. Regina Borges de Araújo tells us:

The basic idea of ubiquitous computing is that computing moves out of workstations and personal computers and becomes pervasive in our everyday life. Mark Weiser, considered the father of ubiquitous computing, envisioned a decade ago that, in the future, computers would inhabit the most trivial objects: clothing labels, coffee cups, light switches, pens and so on, invisibly to the user. (de ARAUJO, 2003)

[8] Besides the authors mentioned in the body of the text, we suggest looking at the works of (BARRIOS & RODRÍGUEZ, 2005) and (ATIENZA, 2008). See the bibliographic references.

The concept, applied to music, opens space for the creation of countless unusual situations, as in the case of the piece Pandora by the composer Sérgio Freire, in which a snare drum is manipulated at a distance by means of commands similar to those of a video game (the Lightning II, a MIDI controller). The performance situation is received by the audience as a 'magic trick', because the sound is produced without physical contact between performer and instrument.[9]

[9] For more information about the work, see the article Pandora: uma caixa-clara tocada à distância (FREIRE, 2007), available at: <http://www.musica.ufmg.br/sfreire/Freire-pandora.pdf>.

Ubiquitous music also fosters collaborative and/or remote compositional work, with creators interacting through computer networks, exchanging information (data) and sharing tools, ultimately altering the consecrated idea of the solitary composer who writes notes on a staff and later hands the result over to a performer who, in principle, need not make any contribution to the production of that musical 'text'. These devices, on the other hand, foster the inclusion in compositional practice of people who do not know conventional notation but who, in many cases, have an excellent ear and sensitivity, as well as knowledge of other areas important for the development of the creative process, such as acoustics and computer programming.
An example of this type of investigation is the article MDF: Proposta Preliminar do Modelo Dentro-Fora de Criação Coletiva, in which Ferraz and Keller propose a 'model for reflecting on the social, personal and material processes that occur during collective musical creation' (FERRAZ & KELLER, 2012). The goal of the text is to develop criteria for analyzing creative collectives in their material, human and procedural aspects. Through the terms in-group and out-group, the authors establish a binary category to classify results obtained in research across these three domains.

In the human aspect, they assess the capacity for interaction between individuals, determined, to some extent, by their cultural background, their previous experiences and their musical knowledge:

[...] the notion of in-group (inward) would be equivalent to the force of cohesion that homogenizes the epistemic fields, and the notion of out-group would correspond to the opposite force that drives the members of the group toward divergence and eventually toward disaggregation. (FERRAZ & KELLER, 2012)

In the material aspect, the dichotomy is established between 'renewable' resources (those that can be used more than once without losing their creative capacity) and 'non-renewable' resources, and between 'rival' and 'non-rival' resources (the latter when the resources can be shared among users). In the procedural aspect, the in-group/out-group dichotomy is defined by the distinction between 'creative waste' (those musical materials discarded or not accepted by the group) and 'product' (the material that is the collective consensus).

Through the example of this article we see one of the questions problematized by ubiquitous music (little explored by authors who declare that they work from the concept of soundscape):[10] the search for a theoretical body on methods of collective creation, so as to establish tools for analyzing the interaction between groups, a contribution of interest to other research areas, including proposals beyond electronic or computer music.

[10] There are some cases of partnership in Brazil, such as that of the composer Ulises Ferretti and the visual artist Claudia Paim, who, although they belong to different areas, conceive and compose works together in both their sonic and visual aspects. In any case, however, they do not present any detailed analysis of how that shared creation took place.
The Ecological Theory of perception in compositional works on soundscapes and ubiquitous music

When we look at texts on ubiquitous music and soundscape, the shared use of the Ecological Theory of Perception stands out, above all as a foundation for musical creation. The starting point of this theory can be found in the book The Perception of the Visual World (GIBSON, 1950), in which James Gibson presents the genealogy of Ground Theory: from experiments in military aviation during the Second World War, it was found that pilots, even in the air, always had the ground as the ultimate reference for their orientation. Gibson then took this experimental finding and combined it with concepts from Gestalt Theory to define a theory that considers perception, ultimately, a direct relational process between a human being (or any other living being) and the environment, an environment that is decoded and mapped (scanned) from the information gathered by the senses. Eric Clarke presents the factors essential to this process: 1) the relation between perception and action; 2) adaptation; and 3) perceptual learning (Clarke, 2005).

The relation between perception and action refers to the relation between each action performed by a body and an environmental response, from the difference between touching an inanimate object and an animal (the latter will probably move) to the different forms of perception that can be accessed depending on the disposition of certain parts of the body. Adaptation is understood as the process that makes it possible to understand the world for purposes of survival, and is related to the experimentation with stimuli and to the memory of the environmental response. Perceptual learning, in turn, involves not only questions of the body's direct relation with the environment but also cultural aspects. Processes of mental representation, for example, are placed by Clarke (2005, p. 11) in this last category. Evidently, however, these three items are not separate compartments; rather, they act simultaneously in the process of 'tuning' the body to the world.[11]

[11] What is interesting about Gibson's theory is that it accounts not only for human perception but also for the perception of animals.

Without wishing to go into the details of this theory, something that obviously goes beyond the purpose of this brief text, we will try to understand how it is used by some composers currently residing in Brazil. André Luiz de Gonçalves de Oliveira, in his text Paisagem sonora como obra híbrida (OLIVEIRA, 2011), seeks to apply it to problems of multichannel spatialization, understanding that the placement of the audience in the conventional proscenium-stage relation would be divorced from environmental perception. Moreover, the Gibsonian model of perception is taken as an aesthetic foundation for interactivity between spectator and work. The central problem for the author, therefore, is to find composition methodologies that make it possible to produce an aesthetic experience similar to that of the body immersed in the environment.

Another composer linked to the study of soundscapes who employs the theories of Gibson and Clarke is Ulises Ferretti. In his thesis Entornos Sonoros. Sonoridades e ordenamentos (FERRETTI, 2011), he mentions the Ecological Theory in close connection with his way of perceiving the environment when working on the composition of instrumental pieces and sound installations:

[...] these listening practices [based on Gibson's theory] are influenced by characteristics of the sound and by the way it is heard. They are shaped by aspects such as the spherical way (in all directions) in which sound propagates, the particularity of the auditory system of capturing sound coming from above, from the sides and from below, and the auditory capacity to focus attention in different ways. Many particularities arising from these ways of listening have been used in the compositional processes of this work. Several of them, such as the differences between focused and peripheral listening and other interactions presented later, distinguish proposals such as Duplo Coro [a sound installation] and Canon Tipológico [an instrumental piece]. (FERRETTI, 2011)

Ferretti seems to seek in his compositions the same relation between environmental experience and work highlighted by Gonçalves, with the difference, perhaps, of placing greater emphasis on the initial phase of the process, on environmental perception as compositional motivation, describing in his texts in detail the reception of environmental sound phenomena. He seems to employ Gibson's theory in order to better 'immerse' himself in the surroundings and store them in memory. In the conception of the work, likewise, the insertion of the body in the diffusion environment is a central issue, including the question of differences of perspective in listening to the environment. And it is at this point that Ferretti recognizes the usefulness of technological tools and, by extension, of research in ubiquitous music.
In ubiquitous music, on the other hand, compositional works apparently do not emphasize the centrality of the perception of the body in the environment under study. They do not make this direct reference to the soundscape itself. The Ecological Theory is employed within the idea of creating compositional models (ecological models) through computational tools. Eric Clarke gives us clues for understanding these procedures when he discusses connectionist models in computing:

Connectionist modeling, which was widely discussed in psychology and computer science [...] differentiates itself from traditional Artificial Intelligence (AI) by claiming that perceptual and cognitive processes can be modeled as the distributed property of a whole system, no particular part of which possesses any "knowledge" at all, rather than as the functioning of explicit rules operating on fixed storage addresses which contain representations or knowledge stores (a crude characterization of AI). A connectionist model typically consists of a network of nodes, interlinked with connections that can take variable values representing their strength (or weight). A layer of input units is connected to a layer of output units, with a variable number of "hidden layers" (usually no more than about two or three) in between. When input units are stimulated, a pattern of activation spreads through the network, the pattern depending on the structure of the connections and the weights assigned to them, and converging on a number of output units. Typically, the network is initially set up with random values assigned to the connection weights, so that the first "activation" results in random behavior of the system as a whole. (Clarke, 2005)

The network of connections of a computational system can thus be programmed to 'function' in a manner similar to an ecosystem, observed as a structure, with its relations between component parts. Responses to given actions can be programmed, or stimulated simply by a greater recurrence of the stimulus (in this case, computational information). These kinds of computational procedures seem to be the basis for the construction of the compositional models preferentially employed in the field of ubiquitous music under the term eco-composition. A question that can be asked is to what extent this way of proceeding is not a continuation of the line of thought of stochastic music, especially if we consider that the model can be employed without any extra-musical referential aims. The search for 'models' carries with it a degree of abstraction that dispenses with the particular case. Consider another domain: if we look for a model for the structural analysis of indigenous myths, for example, we are trying to account not for the differences between individual myths but for the generalities that make it possible to establish links between the different narratives. The works of the 'soundscapists', on the contrary, tend to deal with local particularities, as we shall see next.
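As an illustration of the connectionist description quoted above, the following sketch builds a tiny feedforward network with randomly initialized connection weights, so that its first activation is indeed random behavior of the system as a whole. It is a didactic sketch, not part of Villena's paper or of any eco-compositional system cited in it; all names in it are invented for the example.

// Minimal connectionist sketch: input layer -> hidden layer -> output layer,
// with randomly initialized connection weights (didactic example only).
type Layer = number[][]; // weights[unit][input]

function randomLayer(units: number, inputs: number): Layer {
  return Array.from({ length: units }, () =>
    Array.from({ length: inputs }, () => Math.random() * 2 - 1));
}

function activate(weights: Layer, inputs: number[]): number[] {
  // Each unit sums its weighted inputs and passes the result through tanh.
  return weights.map(row =>
    Math.tanh(row.reduce((sum, w, i) => sum + w * inputs[i], 0)));
}

// Three input units, four hidden units, two output units.
const hidden = randomLayer(4, 3);
const output = randomLayer(2, 4);

// Because the weights start out random, the first "activation" is
// random behavior of the system as a whole, as Clarke describes.
console.log(activate(output, activate(hidden, [0.2, 0.7, 0.1])));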
Cartografias sonoras
Em sua dissertação de mestrado, Cartografias sonoras: um estudo sobre a
produção de lugares a partir das práticas sonoras contemporâneas (NAKAHODO,
2014), a artista sonora Lilian Nakahodo procura a construção de um corpo teórico sobre
trabalhos vinculados às paisagens sonoras a partir de uma ótica da geografia e das
ciências sociais. Esclarece desde o início que a proposta se origina numa postura
humanista, focado em aspectos subjetivos da percepção do que denomina lugares.
[...] neste mundo urbano contemporâneo, atravessamos um período de
uniformidade na intermediação dos relacionamentos cotidianos; há que se ter
bandas cada vez mais largas para as conexões, voos cada vez mais
numerosos, shoppings cada vez maiores, mais... mais... em tempos cada vez
menores. Neste contexto, vive-se mais pela tela de um computador, pelos
fones plugados em um estéreo pessoal e enviando mensagens via whatsapp.
Essa realidade marcada pela velocidade e supostos encurtamentos de
distâncias é, aparentemente, um reflexo das transformações da sociedade que
cria esses espaços que se pode denominar como não lugares. (NAKAHODO,
2014).
A postura adotada não deixa lugar a dúvidas. A pesar de ser um trabalho
teórico em procura de subsídios para um trabalho compositivo pessoal permeado
integralmente pelo uso de recursos tecnológicos (gravadores portáteis, GPS, softwares
multitrack e sistemas de espacialização) a autora apresenta um discurso crítico, em que
o uso indiscriminado destes dispositivos, sem um objetivo humanista pode se tornar um
empecilho à construção de relações de intimidade com o entorno. Os termos chave do
trabalho, lugares e não lugares (emprestados do geógrafo Yi-fu Tang e do antropólogo
Marc Augé), são propostos como metodologia para a prática de soundwalk, a caminhada
de escuta de paisagens sonoras.
A construção de lugares que Nakahodo propõe, em oposição aos não lugares,
impessoais, fugazes, uniformes, é processada através do contato físico do compositor
com o ambiente, na vivência íntima com o local de estudo, na construção de laços
afetivos e na descoberta (tal qual Ferretti menciona) de diferentes perspectivas de
escuta. No entanto, Nakahodo parece ir além, quando comenta seus procedimentos de
escuta pré-composicional, num território limítrofe com o perspectivismo ameríndio:15
Passo a gravar as caminhadas com o intuito de fazer uma peça “grilesca”, um
dos temas preferidos dos paisagistas sonoros, admito. Mas queria que meus
grilos tivessem um tratamento diferente, então pus a gravá-los de todos os
ângulos que me foi possível captar com meu gravador digital portátil, até
imaginar que poderia ser, de fato, um deles. (NAKAHODO, 2014).
A distância entre esta forma de proceder e a procura de modelos composicionais da eco-composição é considerável. A tecnologia é empregada dentro de uma metodologia muito diversa, que oferece, em definitivo, um objeto artístico que visa à relação com um entorno específico.
Análise comparativa de duas peças
Nesta última seção abordaremos algumas questões da poiética compositiva de duas peças: touch’n’go/toco y me voy (1998–1999), de Damián Keller, e Urbana A2 (2010), instalação do compositor Ulises Ferretti e da artista plástica Cláudia Paim. A intenção é observar as formas de pensamento por trás de uma produção da música ubíqua e de uma composição motivada por um entorno sonoro que é deslocado para um local específico.
Para começar, Keller nos informa, em seu artigo Compositional processes from an ecological perspective (2000), que touch’n’go/toco y me voy16 pode ser apresentada em três diferentes formatos: 1) como tape music, em sistema de espacialização de oito caixas de som; 2) como peça estéreo com hypertext markup language (HTML); e 3) como uma performance ao vivo com um ator bilíngue ou dois atores + sistema de 8 canais.
Já Urbana A2, de Ferretti e Paim, é uma instalação sonora e visual, concebida em
função de um local e um ritual: para ser apresentada na chegada do público a um
concerto da Orquestra de Câmara do Theatro São Pedro (Porto Alegre), aproveitando as
características arquitetônicas do espaço para projetar som e imagem, e o contexto social
específico. Em vez de dois atores declamando um texto, neste caso contamos com dois
músicos realizando uma performance.
Outra diferença marcante é a preocupação de Keller de oferecer a obra em formato “doméstico”: um CD em que a peça (feita de seções independentes, mas relacionadas na temática) pode ser ouvida de maneira similar às opções de leitura de Rayuela, de Julio Cortázar, na sequência disposta pelo autor ou de forma randômica. Urbana A2, ao contrário, é uma peça para ser degustada na circunstância e no local singulares em que fora concebida. Os registros em áudio e vídeo são meramente ilustrativos da experiência, que implica, em definitivo, a discussão da cidade num ritual social. É arte efêmera, difícil de ser reapresentada, que se vivencia e se guarda na
15 O antropólogo Viveiros de Castro emprega esse termo para se referir ao processo xamânico pelo qual diversos povos brasileiros tradicionais referem sua procura de uma percepção afinada com a percepção dos animais (ou dos espíritos-animais). (VIVEIROS DE CASTRO, 2004).
16 A peça pode ser ouvida on-line no endereço: <http://www.earsay.com/earsay/soundshop/earsay/CDs/tng.html> (KELLER, 2014)
memória, como uma paisagem sonora que nos pegou de surpresa sem o gravador
portátil.17
Toco y me voy, por outro lado, contém partes que foram estruturadas a partir de um texto literário: o conto de Jorge Luis Borges El jardín de senderos que se bifurcan. Tanto macro-estruturalmente, como obra aberta no modelo cortazariano, quanto no interior de algumas partes, baseadas num conto, a obra é concebida em
diálogo com a literatura e talvez (sugerimos isto após a experiência de escuta) com a
estética da peça radiofônica. Não há uma estrutura que possa ser relacionada com a
rememoração de um meio ambiente de características únicas, mas um fluxo de grande
variedade de informações ambientais e culturais, que convivem numa concepção
relacionada à Perspectiva Espacial Variável, se empregarmos a terminologia de Truax
(TRUAX, 2002) ao referir o tipo de soundscape composition mais “esquizofônica”.
O conceito schaferiano de “esquizofonia”18 (isto é, a colocação das sonoridades
de uma paisagem sonora em outro contexto) também pode ser invocado na instalação de
Ferretti e Paim. Amostras de áudio captadas no Túnel Conceição (Porto Alegre) e filmagens desse e de outros entornos da cidade de Porto Alegre são projetadas no hall de entrada e na sala de concertos do teatro. Porém, som e imagem tratam do mesmo assunto: o trânsito na cidade. Além disso, esses registros do entorno do túnel são inseridos numa experiência que reforça a vivência do ritual que está acontecendo no local. A ideia da performance é refletir sobre o processo de se “desligar” da correria do dia-a-dia para participar de um ritual estético (um concerto de música clássica). As pessoas que chegavam eram recebidas por um violinista que, vestido de terno e gravata, ficava do lado de fora do teatro tocando músicas incompletas. Ao ingressar no hall, deparavam-se com um flautista, vestido com roupa informal, também tocando trechos de música. Junto a isso, na fachada do teatro foram projetadas imagens de vídeo com cenas da cidade, enquanto todo o hall estava cheio de sons da paisagem sonora do Túnel
Conceição. Finalmente, ao ingressar na sala de concertos, os músicos da orquestra já se
encontravam ensaiando partes do concerto sob o som das paisagens e as imagens de
vídeo projetadas no telão.
A ideia da performance-instalação foi deslocar as situações, colocando o
instrumento mais associado à música erudita (vestido de gala) na rua, no suposto âmbito
da música popular e os instrumentos mais próximos ao repertório popular (flauta doce e
transversal) vestidos informalmente no hall do teatro (por sinal, o mais representativo da
tradição oitocentista da cidade). Foi um deslocamento de funções sociais pensado como
forma de gerar uma transição entre os dois ambientes radicalmente opostos e (por que
não?) um certo grau de confusão perceptual.
Esta breve análise dos dois trabalhos serve para reafirmar as características
distintivas das propostas: uma centrada nas possibilidades compositivas que os diversos
materiais ambientais e culturais podem oferecer para uma peça de fundo político, a
outra, vinculada de forma estreita à vivência de dois lugares (como diria Nakahodo),
inventando um dispositivo performático vivo (pessoas tocando instrumentos acústicos)
que gera uma inversão de valores sociais, para que o público, em última instância,
17 Ou mesmo se tivéssemos o gravador. A experiência de escuta da paisagem ao vivo é única e irrepetível por meios eletrônicos.
18 (R. Murray SCHAFER, 1992)
reflita sobre as questões sociais que estão implicadas na sua vivência desse lugar e desse ritual.
A modo de conclusão-início
Esta discussão não tem conclusão: é o início de um diálogo possível entre duas áreas de pesquisa de características diferentes, mas que podem se retroalimentar. Os autores que trabalham sob o conceito de paisagem sonora pretendem continuar, aparentemente, a basear suas criações na intencionalidade “evocativa” dos entornos; pretendem trazer à tona, de diferentes maneiras, a relação do ser humano com o meio ambiente. Os autores que tratam da computação ubíqua aplicada à criação musical dedicam-se a mostrar como a tecnologia abre novas portas à metodologia de composição musical e instigam novas interações sociais. Pretender englobar as pesquisas de uma área no âmbito da outra não contribui para o desenvolvimento do conhecimento; é uma tentativa estéril de esmagar as divergências: a tentativa de compor fundamentada, antes de mais nada, na observação atenta dos fenômenos espontâneos do mundo (um processo de captura de algo “externo”) ou a dedução de modelos para uma forma de composição de caráter talvez mais “interno”. O foco no estudo do meio ambiente dos “paisagistas” pode alimentar a descoberta de modelos compositivos para os “ubíquos”, e a pesquisa tecnológica pode trazer novas ferramentas para transmitir as sensações ambientais, por exemplo (como aponta Ferretti),19 para emular as diferentes perspectivas de percepção ambiental numa instalação sonora. Reconhecer as diferenças talvez seja o caminho para crescermos juntos.
Post-script
Após a conclusão do presente artigo, o compositor Damián Keller ofereceu novas leituras para complementar o texto, que colocam em questionamento algumas de suas afirmações. A partir do artigo Composing with Soundscapes: an Approach Based on Raw Data Reinterpretation (GOMES et al., 2014), tomamos conhecimento de preocupações da música ubíqua intimamente relacionadas aos fundamentos estéticos da soundscape composition: a preocupação por trabalhar a composição com base em dados históricos, etnográficos e geográficos, algo comum, de qualquer maneira, em trabalhos de paisagistas sonoros de outras vertentes. O problema, no entanto, é saber de que forma esses dados são usados no trabalho compositivo. O artigo menciona, por outro lado, ferramentas tecnológicas para a coleta de dados geográficos, de grande utilidade para o trabalho dos compositores, desenvolvidas no projeto URB.20
Outro aspecto que se deve destacar é a relação da eco-composição com outras vertentes ecológicas, como o ecoestruturalismo, que trabalha a partir de padrões da “análise de entornos naturais que revelam [em definitivo] estruturas [intrínsecas] nos próprios materiais” (GOMES et al., 2014); assim como a forte relação da eco-composição com as propostas de Agostino Di Scipio, Matthew Burtner e David Monacchi. Estes dois últimos (com os quais já tínhamos certa familiaridade anteriormente) coincidem no uso do termo ecoacoustic para definir seu trabalho
19 Observação do compositor em conversa via videoconferência.
20 http://www.urb.pt.vu/
criativo, manifestando um engajamento explícito no ativismo ambiental, principalmente na escolha de ecossistemas em crise como locais de pesquisa: a Floresta Amazônica e as regiões geladas do Ártico. O trabalho de Monacchi, sobretudo, consegue evocar de forma muito convincente a sensação de uma escuta inserida na floresta tropical, empregando os “drones” eletrônicos para reforçar os ritmos e a espectromorfologia presentes no espaço estudado.21
Finalmente, as leituras revelaram o uso de conceitos da eco-composição em trabalhos com meios acústicos do compositor Rick Nance, como presente em sua tese Compositional explorations of plastic sounds (NANCE, 2014). Isto é, os fundamentos teóricos e os procedimentos da eco-composição podem perfeitamente ser aplicados a propostas que fogem do campo da música ubíqua, se entendermos esta última como uma extensão da computação ubíqua.
Referências bibliográficas
ABLINGER, P. Peter Ablinger web site.
ATIENZA, R. (2008). Identidad sonora urbana. Les 4èmes Journées Européennes de la Recherche Architecturale et Urbaine EURAU'08 : Paysage Culturel, 1, 1-13.
BARRIOS, I., & RODRÍGUEZ, J. D. G. (2005). Calidad acústica urbana: influencia de
las interacciones audiovisuales en la valoración del ambiente sonoro. Medio
Ambiente y Comportamiento Humano, 8, 101-117.
CAGE, J. (1973). Silence: Lectures and Writings by John Cage. (1º ed.). Hanover: Wesleyan University Press.
CLARKE, E. (2005). Ways of Listening: An Ecological Approach to the Perception of Musical Meaning. (First ed. Vol. 1). New York: Oxford University Press.
de ARAUJO, R. B. (2003). Computação ubíqua: princípios, tecnologias e desafios.
Paper presented at the XXI Simpósio Brasileiro de Redes de Computadores,
Natal.
FERRAZ, S., & KELLER, D. (2012). Preliminary proposal of the MDF model of
collective creation (MDF: Proposta preliminar do modelo dentro-fora de
criação coletiva). Paper presented at the Proceedings of the III Ubiquitous
Music Workshop (III UbiMus). São Paulo.
FERRETTI, U. (2006). Entorno sonoro del cotidiano. Cinco piezas instrumentales. (Mestrado Dissertação), Universidade Federal do Rio Grande do Sul, Porto Alegre. Retrieved from http://www.lume.ufrgs.br/bitstream/handle/10183/6523/000531300.pdf?sequence=1
FERRETTI, U. (2011). Entornos sonoros. Sonoridades e ordenamentos. (Doctorado Tesis), Universidade Federal do Rio Grande do Sul, Porto Alegre. Retrieved from http://www.lume.ufrgs.br/bitstream/handle/10183/35083/000794283.pdf?sequence=1
FREIRE, S. (2007). Pandora: uma caixa clara tocada a distância. Paper presented at
the Simpósio Brasileiro de Computação Musical, São Paulo.
21 Áudios disponíveis no web site do autor: <http://www.davidmonacchi.it/> (MONACCHI, 2014)
GIBSON, J. (1950). The perception of the visual world (H. Mifflin Ed.). Massachusetts: The Riverside Press.
GOMES, J. A., de PINHO, N. P., LOPES, F., COSTA, G., DIAS, R., & BARBOSA, Á. (2014). Composing with Soundscapes: an Approach Based on Raw Data Reinterpretation. Paper presented at the Second Conference on Computation, Communication, Aesthetics and X (xCoAx 2014), Porto. http://2014.xcoax.org/pdf/xcoax2014-Gomes.pdf
GUILMURRAY, J. (2014). ECOACOUSTICS: Ecology and Environmentalism in Contemporary Music and Sound Art. https://www.academia.edu/2701185/ECOACOUSTICS_Ecology_and_Environmentalism_in_Contemporary_Music_and_Sound_Art
KELLER, D. (Producer). (2014). Touch n Go. [Music] Retrieved from
http://www.earsay.com/earsay/soundshop/earsay/CDs/tng.html
KRAUSE, B. (2013). A grande orquestra da natureza. Descobrindo a origem da
música no mundo selvagem. (I. W. Kuck, Trans. 1º ed. Vol. 1). Rio de Janeiro:
Jorge Zahar Editor.
MONACCHI, D. (2014). David Monacchi - Sound design. from
http://www.davidmonacchi.it/
NAKAHODO, L. (2014). Cartografias sonoras: um estudo sobre a produção de lugares a partir de práticas sonoras contemporâneas. (Mestrado), Universidade Federal do Paraná, Curitiba.
NANCE, R. (2014). Plastic Music and Aural Models. https://www.academia.edu/1943681/Plastic_Music_and_Aural_Models
OLIVEIRA, A. L. G. d. (2011). Paisagem Sonora como obra híbrida: espaço e tempo na
produção imagética e sonora. Semeiosis.
ROTHENBERG, D., & ULVAEUS, M. (2001). The book of music and nature (D. Rothenberg & M. Ulvaeus Eds. 2º ed. Vol. 1). Middletown: Wesleyan University Press.
SCHAFER, R. M. (1992). O ouvido pensante (Primeira ed.). São Paulo: Fundação
Editora da UNESP.
SCHAFER, R. M. (1993). Voices of Tyranny (J. Donelson Ed. Second ed.). Ontario:
Arcana Editions.
SCHAFER, R. M. (2014). Princess of the Stars. from http://www.patria.org/pdp/CHAOS/POS/PRINCESS.HTM
TRUAX, B. (2002). Genres and Techniques of Soundscape Composition as developed at Simon Fraser University. Retrieved 21/04/2014, from http://www.sfu.ca/~truax/OS5.html
TRUAX, B. (2014). Soundscape Composition. from http://www.sfu.ca/~truax/scomp.html
VILLENA, M. (2013). Paisagens sonoras instrumentais. Um processo compositivo através da mímesis de sonoridades ambientais. (Mestrado Dissertação), Universidade Federal do Paraná, Curitiba. Retrieved from http://www.humanas.ufpr.br/portal/artes/files/2013/04/Disserta%C3%A7%C3%A3o-Marcelo-Ricardo-Villena-2013.pdf
VIVEIROS DE CASTRO, E. (2004). Perspectivismo e naturalismo. O que nos faz pensar?, 18, 225-254.
Exploring the potential of mobile technology for creating
music collaboratively
Yeray Real-Delgado, Didac Gil de la Iglesia, Nuno Otero
Department of Media Technology, Linnaeus University, Sweden
[email protected], [email protected],
[email protected]
Abstract. We will present the first prototype of a framework that supports collaborative music creation activities using short-distance, location-aware mobile technology. In order to explore the corresponding design space, we are planning to run a series of workshops with practitioners to elicit knowledge and identify likes and dislikes. Such activities will frame the creation of new features. This is part of a long-term goal to explore how mobile technologies can enable the emergence of ubiquitous music activities.
1. Framing the idea and creating the first prototype
According to Keller et al. (2011), recent digital technology developments and artistic
explorations put the creation of sonic products beyond the traditional frames of learning
to play musical instruments and accompanying social practices. Keller et al. (2011) see
interactive installations, performance art, eco-composition, co-operative composition,
mobile music and network music as instantiations of what they call ubiquitous music.
Ubiquitous music research intends to investigate the social practices involved in these
activities and create new ensembles of artifacts to support them. We believe that this
conceptual framework proposed by Keller et al. (2011) is useful for researchers and practitioners alike, helping to give a full account of current and emerging music practices.
Considering this domain (collaborative music creation using mobile technology within the ubiquitous music research framing), our work focuses on the following research questions:
• How can relative-position aware mobile technology support collaborative music creation activities, such as mixing and sequencing sounds?
• Which interactions occur when the proposed system is deployed and the users are able to edit, mix and sequence sounds?
Over the last months, we have implemented a number of mobile application prototypes with the aim of studying the potential benefits that mobile devices can bring to the field of ubiquitous music. The different prototype versions are the result of an iterative software development process that has included co-design phases and user studies. Currently, the latest version of the mobile application, which will be presented during the event, incorporates interaction choices that reflect the results of our previous studies. Examples of these interaction choices include visualization elements that provide a user-friendly experience, software mechanisms that support collaboration in
a transparent manner, and novel uses of emerging communication technologies to provide location context awareness.
More concretely, the current mobile prototype is based on the following technologies: Near Field Communication (NFC) and a Multi-Agent System (MAS). NFC is a short-range wireless communication technology that enables its users to connect the physical world with the virtual one (Want, 2011). This technology is applied to provide location awareness to the application and to allow rich interactions based on the specific placement of the mobile devices. In order to support real-time collaboration through the mobile devices, the prototype uses a MAS framework (Wooldridge, 2002) and recent developments based on previous efforts on mobile collaboration (Gil et al., 2014). The solution allows continuous communication between the mobile devices for coordination and collaboration, enabling the users to perform individual tasks (e.g., recording audio samples to be used in a music composition, selecting an instrument to be emulated via the mobile device, and setting personal configuration parameters such as the device volume) as well as collaborative tasks between the participants in the activity (e.g., defining, for each mobile device, its initial and ending time for playing a sound within the music composition, and providing a tangible interface for discussing the structure of the music composition).
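As an illustration only (this is not the prototype's actual code nor the MAS framework's API; all class, method and file names below are hypothetical), the following Java sketch shows the kind of shared timing state that such a design implies: each device registers its own sound with a start and an end time, timing updates negotiated with peers are applied to the shared timeline, and the timeline can be queried to find which sounds are active at a given moment.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Hypothetical sketch of the shared timing state implied by the prototype:
 *  each device contributes one entry (sound, start, end) to the composition. */
public class CompositionTimeline {

    /** One device's slot in the collaborative composition. */
    public static class Slot {
        final String deviceId;
        final String soundFile;   // e.g. an audio sample recorded on that device
        volatile long startMs;    // when the sound starts within the composition
        volatile long endMs;      // when it stops

        Slot(String deviceId, String soundFile, long startMs, long endMs) {
            this.deviceId = deviceId;
            this.soundFile = soundFile;
            this.startMs = startMs;
            this.endMs = endMs;
        }
    }

    private final Map<String, Slot> slots = new ConcurrentHashMap<>();

    /** Individual task: a device registers its own sound. */
    public void register(String deviceId, String soundFile, long startMs, long endMs) {
        slots.put(deviceId, new Slot(deviceId, soundFile, startMs, endMs));
    }

    /** Collaborative task: apply a timing update broadcast by a peer agent. */
    public void applyPeerUpdate(String deviceId, long startMs, long endMs) {
        Slot s = slots.get(deviceId);
        if (s != null) {
            s.startMs = startMs;
            s.endMs = endMs;
        }
    }

    /** Lists which sounds should be playing at a given moment of the composition. */
    public void printActiveAt(long tMs) {
        slots.values().stream()
             .filter(s -> s.startMs <= tMs && tMs < s.endMs)
             .forEach(s -> System.out.println(s.deviceId + " plays " + s.soundFile));
    }

    public static void main(String[] args) {
        CompositionTimeline timeline = new CompositionTimeline();
        timeline.register("tablet-A", "drums.wav", 0, 8000);
        timeline.register("tablet-B", "voice.wav", 4000, 12000);
        timeline.applyPeerUpdate("tablet-A", 0, 6000); // negotiated over the network
        timeline.printActiveAt(5000);                  // -> tablet-A and tablet-B
    }
}

In the actual prototype this state would presumably be kept consistent across devices by the agents themselves rather than by a single in-memory object.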
2. Future steps
At the current stage, we have run three user studies with novices in the field of music composition. Through these studies, we have been able to identify a number of features that the combination of connected mobile devices and NFC technologies can provide for music creation and for enriching the learning discussions in this area. We are planning to run workshops with music experts and practitioners with different levels of formal music education in order to explore which features these users judge to be most useful and engaging from their perspective. We expect that an efficient and engaging system will not only be more readily adopted but will also promote creative outcomes. Our initial results support this statement, and we believe that our current and future efforts will provide stronger evidence of the benefits of our proposed solution.
References
Gil de la Iglesia, D., et al. (2014, forthcoming). A Self-Adaptive Multi-Agent System Approach for Collaborative Mobile Learning. Under submission to Transactions on Learning Technologies.
Keller, D., et al. (2011). Convergent trends toward ubiquitous music. Journal of New Music Research, 40(3), 265-276.
Want, R. (2011). Near Field Communication. IEEE Pervasive Computing, 10(3), 4-7, July-September.
Wooldridge, M. (2009). An Introduction to MultiAgent Systems (2nd ed.). John Wiley & Sons. ISBN-13: 978-0470519462.
DroneUnknown: An experiment in embracing
unpredictability in live electronic performance
Patrick McGlynn
National University of Ireland, Maynooth
[email protected]
Abstract. DroneUnknown aims to explore this ambiguous space by capitalising on the unpredictability of self-modifying instruments, rather than trying to restrain it. The program runs on intunative (the author's own multi-touch performance platform, formerly known as Oscar) and draws random source material from a bank of samples at initialisation time. The player must navigate through the material live during performance and discover it as the audience does. This leads to a man-machine duel with a number of possible approaches, ranging from a tentative exploration of the program's state to much more aggressive journeys.
The presentation will feature a discussion of the gestural tools available to the performer and conclude with a live demonstration of DroneUnknown. Further information on the performance software can be found at intunative.com.
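As a rough illustration only (this is not the intunative implementation; the folder name and sample count below are invented for the example), initialisation-time selection of random source material from a bank of samples could look like the following Java sketch:

import java.io.File;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

/** Hypothetical sketch: pick a random subset of a sample bank at start-up,
 *  so that each performance begins from unknown source material. */
public class RandomSampleBank {

    public static List<File> drawSamples(File bankDir, int count) {
        File[] files = bankDir.listFiles((dir, name) -> name.endsWith(".wav"));
        List<File> pool = new ArrayList<>(Arrays.asList(files == null ? new File[0] : files));
        Collections.shuffle(pool); // the unpredictable part: a new draw at every run
        return pool.subList(0, Math.min(count, pool.size()));
    }

    public static void main(String[] args) {
        // "samples" and 8 are placeholder values used only for this example.
        for (File f : drawSamples(new File("samples"), 8)) {
            System.out.println("loaded: " + f.getName());
        }
    }
}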
Suporte para a Criatividade Musical Cotidiana: mixDroid
Segunda Geração
Flávio Miranda de Farias1,2,3, Damián Keller1, Floriano Pinheiro da Silva1,
Marcelo Soares Pimenta2, Victor Lazzarini3, Maria Helena de Lima2, Leandro
Costalonga4, Marcelo Johann2
1 NAP, Universidade Federal do Acre (UFAC) e Instituto Federal de Ciência e Tecnologia do Acre (IFAC) – Rio Branco, AC – BR
2 Instituto de Informática e Colégio de Aplicação – Universidade Federal do Rio Grande do Sul (UFRGS) – Porto Alegre, RS – BR
3 National University of Ireland, Maynooth
4 Nescom, Universidade Federal do Espírito Santo, São Mateus, ES – BR
[email protected], [email protected]
Abstract.
This paper focuses on mixing as the object of study of creativity-centered interaction
design. We applied the time tagging metaphor to develop a new ubiquitous music
prototype and carried out experimental work to investigate the relationships between
the technological support strategies and their creative yield. A musician produced 30
sound mixes using different tools and similar sound resources in the same location.
From that output, three creative products – each of approximately 3 minutes – were
chosen. For the first session he used the sound editor Audacity. The second session
was done with the ubiquitous music system mixDroid 1.0 or first generation (1G). The
third session involved the use of a new prototype – mixDroid 2.0 or second
generation (2G). The time invested on each mix was: 97 minutes using Audacity; 6:30
minutes using mixDroid 1G; and 3:30 minutes using mixDroid 2G. 24 subjects
evaluated the three products through the Creative Product Profile (CrePP-NAP)
protocol. Results indicated very similar profiles for the mixDroid 1G and 2G
products. On a scale of -2 to +2, differences weren't larger than 17 cents. Scores for
the descriptors 'relaxing' and 'pleasant' were 0.96 and 1.42 points higher for the
Audacity-made product, but variations among scores were also high. Originality and
expressiveness were slightly higher for Audacity – 21 and 42 cents respectively. In
contrast, the relevance factor of the mixDroid 2G product was 25 cents higher than
the score given to the Audacity product. We conclude that the application of the time
tagging metaphor boosts the efficiency of the creative activity, but that boost does not
extend to the creativity profile of the products.
Resumo. Adotamos a atividade de mixagem como objeto de estudo do suporte
tecnológico necessário para atividades criativas musicais. Mais especificamente,
aplicamos a metáfora de marcação temporal [Keller et al. 2010] – ou time tagging –
como forma de utilizar as pistas sonoras existentes no ambiente para determinar os
tempos de ataque dos eventos sonoros durante a mixagem. Realizamos um estudo
experimental de caráter exploratório com o objetivo de investigar a relação entre a
infraestrutura de suporte e os resultados criativos. Um músico executou trinta
mixagens utilizando os mesmos recursos materiais (amostras sonoras e local de
realização) em três condições diferentes. Na primeira sessão foi usado um editor de
áudio para dispositivos estacionários: Audacity. Na segunda foi usada a ferramenta
musical ubíqua mixDroid 1.0 [Radanovitsck et al. 2011]. Para a terceira sessão foi
implementado um novo protótipo embasado na metáfora de interação marcação
temporal: mixDroid 2G ou segunda geração. Desses resultados foram escolhidos três
produtos criativos de aproximadamente 3 minutos de duração. O tempo de realização
de cada uma das mixagens foi: 97 minutos – Audacity; 6,5 minutos mixDroid 1G; 3,5
minutos mixDroid 2G. 24 sujeitos leigos avaliaram as mixagens utilizando a
ferramenta CrePP-NAP de aferição do perfil do produto criativo. Os resultados
indicam que os produtos criativos obtidos com mixDroid 1G e 2G são similares. Não
observamos diferenças maiores do que 17 centésimos numa escala de -2 a +2. Já a
aferição dos produtos criativos realizados com o editor Audacity apontou diferenças
nos descritores ´relaxante´ e ´agradável´, ficando entre 1,42 e 0,96 pontos acima dos
escores dados aos produtos feitos com mixDroid 1G e 2G. No entanto, a
variabilidade das respostas também foi alta. Os itens originalidade e expressividade
foram levemente superiores nas avaliações do produto feito com Audacity (21 e 42
centésimos respectivamente). Mas no fator relevância, o produto obtido com
mixDroid 2G teve uma média de 25 centésimos acima da média dada à mixagem
realizada com Audacity. Concluímos que a aplicação da metáfora de marcação
temporal aumenta significativamente a eficiência da atividade criativa, porém os
produtos não são necessariamente mais criativos que os produtos resultantes de
estratégias de suporte assíncrono.
1. Introdução
Na pesquisa em música ubíqua, abordamos o problema do suporte à criatividade
aplicando três estratégias experimentais: (1) estudos das atividades prévias ao produto
criativo, (2) estudos das atividades realizadas durante a geração do produto, (3) estudos
de aferição dos resultados obtidos [Keller et al. 2013a]. A primeira categoria abrange os
estudos de design de suporte tecnológico para atividades criativas [Lima et al. 2012;
Pimenta et al. 2012]. O foco desse tipo de pesquisa é entender as implicações das
decisões de design e as demandas e o impacto nos recursos materiais e sociais utilizados
durante o processo criativo. A segunda categoria é ativamente desenvolvida na área de
interação humano-computador e envolve a observação das ações dos participantes
durante atividades criativas, com ênfase nos aspectos funcionais e utilitários do suporte
à interação [Keller et al. 2010; Keller et al. 2013b; Radanovitsck et al. 2011; Pinheiro et
al. 2012; Pinheiro et al. 2013; Pimenta et al. 2013]. A terceira categoria foca a
observação de aspectos da criatividade através da aferição dos produtos criativos. Neste
artigo relatamos os resultados da utilização de produtos criativos para comparar o perfil
de suporte à criatividade cotidiana, utilizando como estudo de caso a atividade de
mixagem.
O problema enfocado neste trabalho pode ser separado em dois aspectos.
Primeiro temos a questão da aferição do suporte a atividades criativas. Nas atividades
assíncronas o fator temporal – relacionado à eficiência do suporte – não é o principal
determinante da qualidade da interação [Miletto et al. 2011]. Já nas atividades síncronas
o fator temporal deveria ter um grande impacto na qualidade da interação. Por esse
motivo, quando a atividade criativa é adotada como objeto de estudo, os mecanismos
síncronos e assíncronos não são comparáveis. Para viabilizar a comparação entre
sistemas de suporte síncronos e assíncronos, aferimos os produtos criativos resultantes
das atividades em lugar de focar a observação das atividades em si. A segunda questão é
se os sistemas assíncronos permitem obter resultados qualitativamente melhores do que
os sistemas síncronos. Se a resposta for afirmativa, a quantidade de tempo investida na
atividade deveria ter impacto no perfil dos produtos criativos. Os dois problemas
conceituais levantados – a dificuldade de comparar atividades criativas síncronas e
assíncronas e a relação entre o tempo investido no ciclo criativo e o perfil dos produtos
criativos resultantes – indicam a necessidade de separar a metodologia em duas partes.
Inicialmente precisamos coletar dados sobre o tempo investido na atividade, mantendo
padronizados a duração dos produtos e as condições experimentais em múltiplas
sessões. Seguidamente, podemos aferir os produtos obtidos. Os resultados dessa
aferição nos permitem avaliar de forma indireta o impacto do sistema de suporte e a
relação entre o tempo investido e o perfil dos produtos criativos.
O texto a seguir está dividido em três seções. Na primeira apresentamos as
características principais das três ferramentas utilizadas no estudo experimental.
Discutimos a motivação para o desenvolvimento de um novo aplicativo de suporte para
atividades criativas em contexto ubíquo e fornecemos exemplos sucintos de uso em
atividades síncronas e assíncronas. Na segunda seção descrevemos os procedimentos de
geração e escolha de material sonoro utilizado no experimento descrito na terceira
seção. Seguidamente descrevemos o protocolo aplicado para aferir o perfil dos três
produtos criativos e apresentamos os resultados obtidos em duas sessões experimentais
envolvendo 24 sujeitos leigos. Com base nesses resultados, analisamos as implicações
dos perfis dos produtos criativos e indicamos as limitações e as perspectivas abertas
pela comparação entre atividades criativas síncronas e assíncronas.
2. A marcação temporal em mixDroid 1G e mixDroid 2G
Com o intuito de viabilizar as atividades criativas em contexto ubíquo, Keller e
coautores (2010) sugeriram o desenvolvimento de metáforas de interação baseadas no
mecanismo cognitivo de ancoragem. Como prova de conceito foi desenvolvida a
primeira geração de protótipos mixDroid [Radanovitsck et al. 2011] no sistema
operacional aberto Android para dispositivos portáteis. O protótipo mixDroid 1.0 (ou
1G ou clássico) permite combinar sons em tempo real através de um teclado virtual com
nove botões acionados pelo toque na tela sensível. A atividade de mixagem está baseada
no disparo de sons através de botões e no registro dos tempos de acionamento. Dado
que o controle se limita a um único parâmetro (o tempo), as habilidades exigidas estão
muito aquém das aplicadas na execução de um instrumento acústico, não dependem de
um sistema simbólico a ser aprendido, e podem ser aprimoradas em função das
características do material sonoro utilizado. Esse mecanismo permite a execução rápida
de até nove sons, dependendo exclusivamente da pré-configuração da matriz de sons
que é construída durante a atividade de seleção, através do carregamento de cada
amostra individualmente para cada botão da interface. Devido à adoção do formato de
áudio estéreo, o resultado de uma sessão pode ser reutilizado como amostra dentro de
uma nova sessão, de forma similar ao processo de overdubbing usado nos sistemas
analógicos de gravação (figuras 1 e 2).
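Apenas como esboço conceitual (o trecho a seguir não é o código do mixDroid; classes, métodos e valores são hipotéticos), a metáfora de marcação temporal descrita acima pode ser resumida em Java da seguinte forma: cada toque em um botão registra o instante de disparo de uma amostra, e a lista de marcas resultante descreve a mixagem e pode ser reexecutada posteriormente.

import java.util.ArrayList;
import java.util.List;

/** Esboço hipotético da metáfora de marcação temporal: o único parâmetro
 *  controlado pelo usuário é o instante de disparo de cada amostra. */
public class MarcacaoTemporal {

    /** Uma marca temporal: qual botão/amostra foi disparado e quando. */
    public static class Marca {
        final int botao;        // índice do botão (0 a 8, no caso de nove botões)
        final long instanteMs;  // tempo decorrido desde o início da sessão

        Marca(int botao, long instanteMs) {
            this.botao = botao;
            this.instanteMs = instanteMs;
        }
    }

    private final List<Marca> marcas = new ArrayList<>();
    private final long inicioSessao = System.currentTimeMillis();

    /** Chamado a cada toque na tela: registra o tempo de acionamento do botão. */
    public void dispararAmostra(int botao) {
        marcas.add(new Marca(botao, System.currentTimeMillis() - inicioSessao));
        // Aqui a amostra associada ao botão também seria tocada imediatamente.
    }

    /** A mixagem resultante é a lista de marcas, que pode ser reexecutada. */
    public void imprimirMixagem() {
        for (Marca m : marcas) {
            System.out.printf("t=%d ms -> amostra do botao %d%n", m.instanteMs, m.botao);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        MarcacaoTemporal sessao = new MarcacaoTemporal();
        sessao.dispararAmostra(0);
        Thread.sleep(500);
        sessao.dispararAmostra(3);
        sessao.imprimirMixagem();
    }
}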
Figuras 1 e 2. mixDroid 1G [Radanovitsck et al. 2011].
Figuras 3 e 4. mixDroid 2G: tela de mixagem.
O protótipo mixDroid 2G versão 2.0 beta amplia as possibilidades de aplicação
da metáfora de marcação temporal introduzindo novas funcionalidades na leitura e
gravação dos dados sonoros (figuras 3 e 4). Para o desenvolvimento nativo no sistema
operacional Android foi utilizada a linguagem de programação Java. A configuração de
layout da interface e o sistema de marcação temporal foram implementados em XML
(Extensible Markup Language – XML 2014). A plataforma de desenvolvimento (IDE) e
o kit de desenvolvimento (SDK) Android foram Eclipse 4.3.1 para Windows e Android
4.4.2 (API 19), respectivamente.
A primeira geração de protótipo mixDroid foi desenvolvida quando o sistema
operacional não tinha suporte para manipulação de arquivos ou para manipulação de
áudio em tempo real. A análise de múltiplos estudos de caso e a coleta de informações
com usuários de mixDroid 1G indicaram a necessidade de atualizar o código-base
fornecendo uma nova versão que incorporasse os avanços do sistema operacional no
suporte à portabilidade dos arquivos, a ampliação da documentação de desenvolvimento
facilitando a atualização, e a aplicação estrita da estrutura hierárquica orientada a
objetos [Keller et al. 2013b; Pinheiro et al. 2013; Pimenta et al. 2013]. Porém, levando
em conta a necessidade de manter a compatibilidade retroativa, somente foram
adicionadas bibliotecas dentro do perfil de requisitos da versão Android 1.6 (API 4).
A tabela 1 fornece um quadro das funcionalidades da segunda geração em comparação com a primeira geração de protótipos. Entre os vários módulos e funções desenvolvidos com o objetivo de aumentar a usabilidade, tem destaque a permissão de leitura e gravação primária no sistema de armazenamento externo do aparelho (SD-card). Nesse dispositivo são criadas duas pastas ao iniciar a sessão: (1) MixdroidSongs, onde deverão ser adicionados os arquivos que serão listados e executados pelo reprodutor de áudio; e (2) MixdroidRecords, responsável pelo armazenamento dos arquivos de mixagem (ver figuras 4 e 5). Ambas as pastas podem ser posteriormente alteradas pelo usuário acessando a tela de configurações.
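A título de ilustração (o trecho abaixo não é o código do mixDroid; trata-se de um esboço em Java puro, em que o diretório base é apenas um caminho de exemplo que, no aplicativo Android, corresponderia ao armazenamento externo do aparelho), a criação dessas duas pastas ao iniciar a sessão poderia ser resumida assim:

import java.io.File;

/** Esboço hipotético: criação das pastas de leitura e gravação na
 *  abertura de uma sessão de mixagem. */
public class PastasDaSessao {
    private final File pastaMusicas;   // arquivos listados pelo reprodutor de áudio
    private final File pastaGravacoes; // arquivos de mixagem gerados pelo usuário

    public PastasDaSessao(File armazenamentoExterno) {
        // No aplicativo Android, armazenamentoExterno seria o SD-card do aparelho.
        this.pastaMusicas = new File(armazenamentoExterno, "MixdroidSongs");
        this.pastaGravacoes = new File(armazenamentoExterno, "MixdroidRecords");
    }

    /** Cria as duas pastas caso ainda não existam. */
    public void inicializar() {
        if (!pastaMusicas.exists() && !pastaMusicas.mkdirs()) {
            System.err.println("Nao foi possivel criar " + pastaMusicas);
        }
        if (!pastaGravacoes.exists() && !pastaGravacoes.mkdirs()) {
            System.err.println("Nao foi possivel criar " + pastaGravacoes);
        }
    }

    public static void main(String[] args) {
        // Caminho fictício, apenas para ilustração fora do Android.
        new PastasDaSessao(new File("sdcard-exemplo")).inicializar();
    }
}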
Tabela 1. Tabela expositiva de características da primeira e da segunda geração de mixDroid.
Características | mixDroid 1G | mixDroid 2G
Formatos de leitura | wav | wav, mp3, ogg, entre outros
Formatos de armazenamento | Banco de dados interno | XML, WAV, MP3
Quantidade de arquivos manipulados simultaneamente | No máximo 9 por sessão | Limitado pela capacidade de memória RAM do dispositivo
Versão Android/API retrocompatibilidade | Android 1.6/API 4 | Android 1.6/API 4
Gerenciamento de diretórios de leitura e/ou gravação | não | sim
Captura de som via microfone | não | sim
Exportação de mixagens | não | sim (no formato XML)
Seleção de arquivos de áudio | Individual para cada música | Carregamento automático por seleção de pastas
Visualização do histórico de gravação | sim | sim (em forma de animação)
Desinstalação limpa | não | sim
Instalação | Necessita de pré-instalação de software de terceiros | Direta e sem pré-requisitos
Uma das ferramentas de código aberto mais utilizadas atualmente no trabalho de
edição e mixagem de áudio é o editor para dispositivos estacionários Audacity [Mazzoni
e Dannenberg 2000]. A interface para o trabalho de mixagem adota a metáfora da fita,
onde as amostras de áudio são visualizadas em trilhas, fornecendo suporte visual para as
operações de posicionamento dos eventos no eixo temporal. Essa metáfora de interação
é útil em dispositivos com tela ampla e boa disponibilidade de CPU; no entanto, apresenta limitações em dispositivos com tela pequena ou com recursos limitados, já que a maioria das operações de áudio é acompanhada por atualizações na representação visual dos dados sonoros. Levando em conta esse perfil, o ambiente natural de uso dos editores que adotam esse tipo de metáfora é o estúdio.
Durante pesquisas comparativas foi encontrada uma ferramenta livre para navegadores de Internet chamada FreeSounds (2014)1. Hospedada em site de nome semelhante, ela oferece uma grande gama de opções e recursos semelhantes aos do mixDroid 2G, porém em nível mais avançado. Como o foco do FreeSounds não é a mixagem propriamente dita, a ferramenta acaba se afastando do objetivo principal do projeto, que é ter uma interface simples, voltada para usuários leigos, que utilize poucos recursos,
com suporte offline e centrada na portabilidade. Esses itens vêm sendo indicados como
requisitos básicos das ferramentas musicais ubíquas [Keller et al. 2011a; Pimenta et al.
2012]. Estudos futuros poderão estabelecer quais itens são mais relevantes para o
suporte à criatividade musical cotidiana, e se é viável incorporar ferramentas que
dependam da conectividade à rede, como é o caso do FreeSounds.
1 Agradecemos ao revisor anônimo do V UbiMus por ter apontado as similaridades entre mixDroid e FreeSounds.
3. Procedimento de geração de produtos criativos: minicomps
Tendo apresentado as principais características dos ambientes de suporte, nesta
seção descrevemos o método utilizado para a geração dos produtos criativos, e
fornecemos dados sobre as amostras sonoras utilizadas, as ferramentas de suporte para
mixagem e os resultados sonoros obtidos. Adotamos os mesmos procedimentos
utilizados em estudos anteriores: as minicomps ou mini-composições [Keller et al.
2011b]. As minicomps propõem a realização de um ciclo criativo completo em uma
única sessão, permitindo a quantificação do tempo de atividade criativa.
3.1. Amostras sonoras
Os materiais sonoros usados nas mini-composições foram gravados em formato estéreo,
com taxa de amostragem de 44.1 kHz e resolução de 16 bits, utilizando um gravador
digital portátil profissional e um microfone direcional cardioide de tipo condensador. A
edição e segmentação foram realizadas no editor Audacity. Do material coletado foram
selecionadas nove amostras sonoras gravadas em três ambientes diferentes, abrangendo
sons urbanos, sons de animais e sons domésticos. Descrições detalhadas desse material
estão disponíveis em [Keller et al. 2013b; Pinheiro et al. 2012, 2013].
Tabela 2. Amostras sonoras.
Amostras | Formato | Tamanho | Tipo de amostra
Carro 01 | Som Wave | 796 KB | sons urbanos
Carro 02 | Som Wave | 312 KB | sons urbanos
Carro 03 | Som Wave | 326 KB | sons urbanos
Carro 04 | Som Wave | 649 KB | sons urbanos
Cozinha 01 | Som Wave | 1.454 KB | sons domésticos
Cozinha 02 | Som Wave | 1.501 KB | sons domésticos
Cozinha 03 | Som Wave | 3.068 KB | sons domésticos
Rã 01 | Som Wave | 3.406 KB | sons de animais
Rã 02 | Som Wave | 643 KB | sons de animais
3.2. Ferramentas
Os sistemas utilizados foram:
• Audacity versão 2.3 para Windows, rodando em computador portátil, com mouse óptico e teclado QWERTY padrão;
• mixDroid versão 1.0 rodando em tablet nacional Coby de 7 polegadas com sistema operacional Android 2.2 (figura 2);
• mixDroid versão 2.0 rodando em tablet nacional Coby de 7 polegadas com sistema operacional Android 2.2 (figura 4).
3.3. Procedimentos
Para a aferição dos produtos criativos, um músico com experiência no uso das três
ferramentas produziu – utilizando as amostras listadas na seção anterior – trinta
mixagens de aproximadamente 3 minutos de duração. Das mixagens aprovadas pelo
músico como resultados satisfatórios, foram escolhidas três mixagens correspondentes
às médias do tempo de execução com cada ferramenta.
Figura 5. Músico criando uma minicomp no mixDroid 2G.
3.4. Resultados das minicomps
Como é possível observar na figura 6, o tempo de produção das mixagens por um usuário profissional foi de aproximadamente uma hora e meia com Audacity, em contraste com a média de três minutos e meio com mixDroid 2G e de seis minutos e meio com mixDroid 1G. Esses resultados são consistentes com as características dos sistemas de suporte descritas na primeira seção deste artigo. Audacity fornece suporte
para atividades assíncronas e mixDroid dá suporte para atividades síncronas. Portanto o
tempo de realização das mixagens é levemente superior ao tempo total do produto
sonoro, de 150% a 200% no caso da ferramenta mixDroid. Já o tempo de realização
com o sistema de suporte assíncrono Audacity supera em mais de 30 vezes o tempo do
produto sonoro. Dada essa diferença no investimento temporal na atividade, espera-se
que os resultados obtidos de forma assíncrona sejam muito superiores aos resultados da
atividade síncrona. Uma forma de verificar se essa hipótese é correta envolve a aferição
dos produtos criativos obtidos com cada uma das ferramentas. O objetivo do estudo
descrito na terceira seção deste artigo é determinar se o investimento temporal na
atividade criativa pode ser correlacionado com o perfil dos produtos obtidos.
4. Aferição dos produtos criativos
Nesta seção descrevemos os procedimentos utilizados para aferir os produtos criativos
obtidos através do protocolo minicomps. O objetivo do experimento é comparar os
descritores vinculados ao nível de criatividade de cada produto. Os dados obtidos nesta
fase da pesquisa servirão para determinar se as estratégias de suporte aplicadas no
design de interação das três ferramentas adotadas têm impacto nos resultados sonoros
produzidos por um usuário experiente.
Figura 6. Gráfico comparativo do tempo de produção de mixagens em três ambientes de suporte:
Audacity, mixDroid 1G e mixDroid 2G.
4.1. Localização das sessões de aferição dos produtos criativos
Todo o processo experimental foi realizado no Instituto Federal do Acre (IFAC),
Campus Rio Branco. As duas sessões experimentais aconteceram dentro das salas de
aula do curso de ensino médio integrado ao técnico em informática (sessão 1, sala 104)
e dos cursos técnicos de informática (sessão 2, sala 107). As salas têm
aproximadamente 15 metros de largura por 20 metros de comprimento, sistema de
climatização modelo Split, e carecem de tratamento acústico.
4.2. Perfil dos sujeitos
A aferição dos produtos criativos contou com a participação de 24 sujeitos com idades
entre 17 e 55 anos; escolaridade média = 11 a 10 anos; e estudo musical entre 0 e 10
anos. Todos os sujeitos tiveram alguma experiência prévia em uso de tecnologia. 90%
dos sujeitos fizeram uso de telefone celular durante os últimos 5 anos. Vinte e três dos vinte e quatro sujeitos tiveram experiências com ferramentas multimídia (como YouTube e
MediaPlayer). Dois sujeitos afirmaram possuir conhecimento de linguagens de
programação e de tecnologias desenvolvidas para fins musicais, incluindo o editor
Audacity.
4.3. Procedimentos de aferição
Para aferir o perfil criativo das três mixagens escolhidas, foi utilizado o protocolo
Creative Product Profile ou Perfil do Produto Criativo (CrePP-NAP v.04). Ao longo de
múltiplos estudos preliminares foram ajustados: o número de fatores, a escala de
aferição, e o tipo de dados pessoais coletados durante a sessão. Na sua versão 0,04 o
CrePP consiste em um formulário eletrônico que avalia o produto através de cinco
descritores – bem feito, original, expressivo, relaxante, agradável – e inclui um campo
para observações por parte dos sujeitos [Barbosa et al. 2010; Keller et al. 2011b]. A
escala de aferição é de -2 a +2. Para fins de aplicação, as perguntas foram impressas em
folhas de papel e o questionário foi apresentado a todos os sujeitos de forma simultânea.
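Apenas para ilustrar o tipo de agregação envolvida na aferição (o esboço abaixo não reproduz o instrumento CrePP-NAP; os escores e nomes usados são meramente exemplificativos), o cálculo da média de cada descritor, na escala de -2 a +2, pode ser resumido em Java assim:

import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/** Esboço hipotético: média por descritor (escala de -2 a +2) para um produto. */
public class PerfilDoProduto {

    static final String[] DESCRITORES =
            {"bem feito", "original", "expressivo", "relaxante", "agradável"};

    /** Cada resposta de um sujeito é um vetor de cinco escores entre -2 e +2. */
    public static Map<String, Double> mediasPorDescritor(List<int[]> respostas) {
        Map<String, Double> medias = new LinkedHashMap<>();
        for (int d = 0; d < DESCRITORES.length; d++) {
            double soma = 0;
            for (int[] r : respostas) {
                soma += r[d];
            }
            medias.put(DESCRITORES[d], soma / respostas.size());
        }
        return medias;
    }

    public static void main(String[] args) {
        // Valores fictícios, apenas para demonstrar o cálculo.
        List<int[]> respostas = List.of(
                new int[]{2, 1, 1, -1, 0},
                new int[]{1, 0, 2, -2, -1},
                new int[]{2, 1, 0, 0, 1});
        mediasPorDescritor(respostas).forEach(
                (descritor, media) -> System.out.printf("%s: %.2f%n", descritor, media));
    }
}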
A atividade de aferição foi dividida em duas partes. Na primeira parte foi
apresentado um questionário sobre a experiência do sujeito com tecnologia e sobre seus
conhecimentos musicais. As mixagens foram tocadas para o grupo de alunos uma única
vez por sala, na seguinte sequência: produto do Audacity, depois produto do mixDroid
1G e por último produto do mixDroid 2G. Ao fim de cada execução, os participantes
preencheram o formulário. As aferições realizadas totalizaram 72 para os três produtos
criativos.
Figura 7. Sujeitos preenchendo formulário de avaliação das mini composições feitas com
Audacity, mixDroid 1G e 2G.
4.4. Resultados
Os resultados indicam que os produtos criativos obtidos com mixDroid 1G e 2G têm um
perfil similar. Não observamos diferenças maiores do que 17 centésimos numa escala de
-2 a +2. Já a aferição dos produtos criativos realizados com o editor Audacity apontou
diferenças nos descritores relaxante e agradável, ficando entre 1,42 e 0,96 pontos acima
dos escores dados aos produtos feitos com mixDroid 1G e 2G. No entanto, a
variabilidade das respostas também foi alta. Os itens originalidade e expressividade
foram levemente superiores nas avaliações do produto feito com Audacity (21 e 42
centésimos respectivamente). Mas no fator qualidade (descritor: bem feito), o produto
obtido com mixDroid 2G teve uma média de 25 centésimos acima da média dada à
mixagem realizada com Audacity.
Figura 8. Comparação entre os perfis dos três produtos.
Numa análise mais apurada, o perfil do produto criativo obtido com Audacity
mostra resultados similares para os fatores relevância (se é bom), originalidade e prazer.
Já os fatores vinculados a relaxamento e agradabilidade ficaram abaixo dos outros
escores.
Figura 9. Perfil do produto obtido com Audacity.
Em contraste, os descritores relaxante e agradável receberam escores
negativos nas duas mixagens feitas com mixDroid 1G e 2G. Por outro lado, a
variabilidade nas respostas para esses dois fatores foi maior em todos os casos
exceto no descritor agradável para a mixagem feita com mixDroid 1G.
Figura 10. Perfil do produto obtido com mixDroid 1G.
É interessante observar que o fator relevância teve o melhor resultado para
o produto feito com mixDroid 2G, seguido pelo escore dado à mixagem feita com
mixDroid 1G. No entanto, essa tendência não foi acompanhada pelo escore dado ao
fator originalidade. Os resultados para os produtos feitos com mixDroid 1G e 2G
foram menores para os fatores originalidade e expressividade.
Observando os dados em conjunto, as mixagens feitas com mixDroid 1G e
2G apresentam praticamente o mesmo perfil. Os fatores seguem a mesma ordem de
maior a menor: bom, original, expressivo, agradável, relaxante. Essa tendência
contrasta com a variação pequena entre as médias do perfil do produto feito com
Audacity.
Figura 11. Perfil do produto obtido com mixDroid 2G.
5. Discussão dos resultados e considerações finais
Os resultados obtidos confirmam parte das hipóteses formuladas a partir de observações
em estudos preliminares. (1) A aferição de produtos criativos fornece resultados
consistentes para as duas ferramentas de suporte a atividades síncronas e gera um perfil
contrastante para a ferramenta de suporte a atividades assíncronas. Esse resultado indica
que a metodologia proposta é viável. (2) Apesar do alto investimento temporal exigido
pela ferramenta de mixagem assíncrona, a diferença no perfil de criatividade dos
produtos obtidos com as ferramentas síncronas não indicou quedas generalizadas nos
escores. Surpreendentemente, os resultados no fator relevância foram inversamente
proporcionais ao tempo investido na mixagem. Porém, essa tendência não foi acompanhada pelos escores dados à originalidade. Portanto, podemos concluir que o suporte síncrono favorece a relevância do produto, mas não tem o mesmo impacto na originalidade. (3) Se todos os fatores de criatividade tivessem mostrado aumentos para os produtos feitos com ferramentas síncronas, poderíamos concluir que esse tipo de suporte não só é mais eficiente como também fomenta a criatividade. Os resultados
não são uniformes para todos os fatores. Houve aumento na relevância dos produtos, e
queda nos descritores agradável e relaxante. Já as diferenças nos escores de
originalidade e expressividade foram relativamente pequenas.
Entre as limitações do estudo, apontamos a possibilidade de que a ordem de
aplicação do CrePP-NAP tenha tido impacto nas aferições. Em experimentos futuros
estabeleceremos uma matriz de aplicação que elimine o possível efeito da ordem de
apresentação das minicomps. Outra limitação é a mudança entre o ambiente de realização
da atividade criativa e o ambiente de aferição do produto criativo. Qualificando os
resultados como preliminares, podemos afirmar que o suporte síncrono fomenta a
geração de produtos com perfil diferente do suporte assíncrono porém não
necessariamente mais ou menos criativo.
Portanto concluímos que a metáfora de marcação temporal aumenta
significativamente a eficiência da atividade criativa e tem impacto no perfil dos
produtos criativos. Esse perfil é diferente dos produtos gerados a partir do suporte
assíncrono porém não é necessariamente mais ou menos criativo. A utilização do
método de aferição proposto neste trabalho permite a comparação do impacto de
diversas estratégias de suporte, ampliando o leque de técnicas disponíveis para o design
de interação centrado em criatividade.
Referências
FreeSound, Disponível em: <http://www.freesound.org/browse/>. Acesso em: 26 de
agosto de 2014 às 20:30 hs.
Keller, D., Barreiro, D. L., Queiroz, M. & Pimenta, M. S. (2010). Anchoring in
ubiquitous musical activities. In Proceedings of the International Computer Music
Conference (pp. 319-326). Ann Arbor, MI: MPublishing, University of Michigan
Library.
Keller, D., Ferreira da Silva, E., Pinheiro da Silva, F., Lima, M. H., Pimenta, M. S. &
Lazzarini, V. (2013). Everyday musical creativity: An exploratory study with vocal
percussion (Criatividade musical cotidiana: Um estudo exploratório com sons
vocais percussivos). In Anais do Congresso da Associação Nacional de Pesquisa e
Pós-Graduação em Música - ANPPOM. Natal, RN: ANPPOM.
Keller, D., Flores, L. V., Pimenta, M. S., Capasso, A. & Tinajero, P. (2011). Convergent
Trends Toward Ubiquitous Music. Journal of New Music Research 40 (3), 265-276.
(Doi: 10.1080/09298215.2011.594514.)
Keller, D., Lima, M. H., Pimenta, M. S. & Queiroz, M. (2011). Assessing musical
creativity: material, procedural and contextual dimensions. In Anais do Congresso da
Associação Nacional de Pesquisa e Pós-Graduação em Música - ANPPOM (pp.
708-714). Uberlândia, MG: ANPPOM.
Keller, D., Pimenta, M. S. & Lazzarini, V. (2013). Os Ingredientes da Criatividade em
Música Ubíqua. In D. Keller, D. Quaranta & R. Sigal (eds.), Sonic Ideas, Vol.
Criatividade Musical / Creatividad Musical. México, DF: CMMAS.
Keller, D., Pinheiro da Silva, F., Ferreira da Silva, E., Lazzarini, V. & Pimenta, M. S.
(2013). Design oportunista de sistemas musicais ubíquos: O impacto do fator de
ancoragem no suporte à criatividade. In E. Ferneda, G. Cabral & D. Keller (eds.),
Proceedings of the XIV Brazilian Symposium on Computer Music (SBCM 2013).
Brasília, DF: SBC.
Keller, D., Pinheiro da Silva, F., Giorni, B., Pimenta, M. S. & Queiroz, M. (2011).
Marcação espacial: estudo exploratório. In Proceedings of the 13th Brazilian
Symposium on Computer Music. Vitória, ES: SBC.
Mazzoni, D. & Dannenberg, R. (2000). Audacity [Editor de Áudio]. Pittsburgh, PA:
Carnegie Mellon University. http://audacity.sourceforge.net/about/credits.
Pimenta, M. S., Flores, L. V., Radanovitsck, E. A. A., Keller, D. & Lazzarini, V.
(2013). Aplicando a Metáfora de Marcação Temporal para Atividades Criativas com
mixDroid. In D. Keller, D. Quaranta & R. Sigal (eds.), Sonic Ideas, Vol. Criatividade
Musical / Creatividad Musical. México, DF: CMMAS.
Pimenta, M. S., Miletto, E. M., Keller, D. & Flores, L. V. (2012). Technological support
for online communities focusing on music creation: Adopting collaboration,
flexibility and multiculturality from Brazilian creativity styles. In N. A. Azab (ed.),
Cases on Web 2.0 in Developing Countries: Studies on Implementation, Application
and Use. Vancouver, BC: IGI Global Press. (ISBN: 1466625155.)
Pinheiro da Silva, F., Keller, D., Ferreira da Silva, E., Pimenta, M. S. & Lazzarini, V.
(2013). Criatividade musical cotidiana: estudo exploratório de atividades musicais
ubíquas. Música Hodie 13, 64-79.
Pinheiro da Silva, F., Pimenta, M. S., Lazzarini, V. & Keller, D. (2012). A marcação
temporal no seu nicho: Engajamento, explorabilidade e atenção criativa. In
Proceedings of the III Ubiquitous Music Workshop (III UbiMus). São Paulo, SP:
Ubiquitous Music Group.
Radanovitsck, E. A. A., Keller, D., Flores, L. V., Pimenta, M. S. & Queiroz, M. (2011). mixDroid: Marcação temporal para atividades criativas. In Proceedings of the XIII Brazilian Symposium on Computer Music (SBCM 2011). Vitória, ES: SBC.
Making meaningful musical experiences accessible using
the iPad
Andrew R. Brown1, Donald Stewart2, Amber Hansen1, Alanna Stewart2
1 Queensland Conservatorium, Griffith University, Brisbane, Australia
2 School of Medicine, Griffith University, Brisbane, Australia
{andrew.r.brown, donald.stewart, a.hansen, alanna.stewart}
@griffith.edu.au
Abstract. In this paper we report on our experiences using ubiquitous
computing devices to introduce music-based creative activities into an
Australian school. The use of music applications on mobile tablet computers
(iPads) made these activities accessible to students with a limited range of
prior musical background and in a general purpose classroom setting. The
activities were designed to be meaningful and contribute toward personal
resilience in the students. We describe the approach to meeting these
objectives and discuss results of the project. The paper includes an overview
of the ongoing project including its aims, objectives and utilisation of mobile
technologies and software with generative and networkable capabilities. Two
theoretical frameworks inform the research design: the meaningful
engagement matrix and personal resilience. We describe these frameworks
and how they inform the activity planning. We report on the activities
undertaken to date and share results from questionnaires, interviews, musical
outcomes, and observation.
1. Introduction
This project builds on the authors’ previous work with network music jamming systems
(Brown and Dillon 2007) and youth resilience (Stewart et al. 2004, Stewart 2014).
These research threads have come together in this project. Taking advantage of the
ubiquitous nature of mobile computing devices (in particular of Apple’s iPad), the
project aims to provide school students who have no particular background in music with access to the creative and well-being benefits of collaborative and personally expressive music making. This project takes a step forward from our previous network jamming research by using Apple's GarageBand software on the iPad rather than our own jam2jam software on laptop and desktop computers. jam2jam was specifically
written for our previous research on how technologies afford meaningful engagement
with music. It was used in this capacity between 2002 and 2012. The main software
features of jam2jam that support accessibility and engagement are 1) the use of
generative music processes to enable participation by inexperienced musicians, 2) the
ability for systems to be synchronized over a network facilitating coordination amongst
users, either locally or at a distance, and 3) the ability to record music making activities
and export these for sharing. These features are now present in GarageBand for iPad
(and an increasing number of other commercial software and hardware combinations).
In our previous examination of developing resilience in school contexts, positive
contributing factors included students developing a sense of autonomy and feelings of
connectedness with peers and adults. We suggest that the scaffolding effect of
generative music processes can assist in promoting a sense of creative autonomy in
inexperienced musicians and that the collaborative aspects of group music making can
strengthen feelings of connectedness amongst peers. An aim of this project is to show
how the principles of education and health-promotion developed in our previous
research can transfer to the use of ubiquitous computing systems.
1.1 Brief description of the project
This project focuses on building and supporting young people’s engagement and
connectedness with their creative selves, and on helping to build resilience through musical collaboration and success. Working with a school of Indigenous Australian students (the Murri school) based in Brisbane, Australia, we have provided opportunities for musical
expression using music technology through the school curriculum.
The project engages Indigenous Australian students using a digital audio production system that allows them to draw on their personal, social and cultural identities in meaningful creative endeavors. The project trials newly emerging technology, using iPads with GarageBand software, to explore the development of self-confidence and self-esteem.
The approach involves the trialing of weekly music-based activities in several classes over two terms (20 weeks). The activities are designed to offer opportunities for students to achieve creative educational goals, to engage them in expressive music-making, to develop self-esteem and to develop creative collaborations with peers.
The project aims to provide evidence of a positive model for engaging school
students in an interactive music-based education program and for building confidence and resilience. The objectives of the project are to:
• Trial and evaluate new generative music technology to explore improvements in
engagement and connectedness between students and the education system.
• Build resilience and raise the levels of educational achievement and aspirations
of Indigenous students.
• Identify positive models of music education and health promotion.
• Use music technologies to build a sense of belonging and connectedness within
the school environment that is protective of mental and emotional wellbeing.
2. Accessibility via mobile technologies
A catalyst for this project is the availability of appropriate computing software and
hardware for music making. Apple’s iPad and GarageBand software have features that
make the activities of this project much more accessible than they have previously been.
The iPad’s small size and battery life make it easy for students to handle and easy for
schools to accommodate. The GarageBand software utilizes ‘smart instruments’ and
‘Apple loops’ that simplify music production. The smart instruments provide a
constrained performance environment that minimizes ‘mistakes’ and can be used in
music education in a similar way to how restricted acoustic instruments (such as small xylophones) have been used in the past. The music clips (Apple loops) allow for a constructor-set approach to music making where students can combine these building blocks without needing (yet) the facility to make the clips from scratch. The iPad and GarageBand combination supports collaboration by allowing students, each with an iPad, to synchronize their music making over a local network. This activity, which we have previously called network jamming, enables groups of students to perform together. Finally, the ability of the software to record the music they compose and export files for later review and distribution means that students’ work can be available for reflection and/or sharing with the wider community.
3. Meaningful Engagement
The theory of meaningful engagement was developed by Andrew R. Brown and Steve
Dillon (2012) and has underscored the development of network jamming research more
broadly. It involves two dimensions. Musical engagement includes various creative
behaviors, or ways of being involved in music. The modes of engagement outlined in
the theory cover a range of interactions from listening and appreciating, to creating,
performing and leading. The theory suggests that meaning can arise from engagements
with music in three contexts: personal, social and cultural. That is, music can be
personally satisfying, it can lead to positive social relationships, and it can provide a
sense of cultural or community identity. Below is a summary of the modes of
engagement and context for meaning.
Modes of Creative Engagement
• Appreciating – paying careful attention to creative works and their
representations
• Evaluating – judging aesthetic value and cultural appropriateness
• Directing – leading creative making activities
• Exploring – searching through artistic possibilities
• Embodying – being engrossed in fluent creative expression
Contexts of Creative Meaning
• Personal – intrinsically enjoying the activity
• Social – developing relationships with others
• Cultural – feeling that actions are valued by the community
The two aspects of meaningful engagement can be depicted as the axes of a matrix, as
shown in figure 1.
Figure 1. The Meaningful Engagement Matrix with exemplar musical activities
The meaningful engagement matrix (MEM) is a framework for describing creative
experiences and evaluating creative resources, plans or activities. This can be, for
example, assessing a community or educational workshop, reviewing the
comprehensiveness of an arts curriculum or lesson plan, or evaluating the affordances of a
software application for creating media content. While this matrix was developed for
musical activities it can be applied to other pursuits, especially in the Arts.
Artistic experiences become meaningful when they resonate with us and are
satisfying. The meaningful engagement matrix has been developed to assist inquiry into
our creative activities and relationships. A full creative life, the theory suggests,
involves experiences across all cells of the matrix. Therefore, this framework can be
useful when auditing the range of experiences afforded by any particular activity,
program or resource, or across a set/series of these. It is in the assessment of the whole-of-program view of this project that the MEM provides its greatest utility.
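As a concrete illustration of how such an audit might be carried out, the sketch below (ours, not part of the MEM literature or the project's tooling, and with hypothetical activity names) represents the two dimensions of the matrix as simple Python lists and reports which mode and context combinations a given plan leaves uncovered.

```python
# Minimal illustrative sketch of auditing a plan against the MEM.
# The modes and contexts follow the lists in Section 3; the example plan is hypothetical.

MODES = ["Attend", "Evaluate", "Explore", "Direct", "Embody"]
CONTEXTS = ["Personal", "Social", "Cultural"]

def audit_coverage(activities):
    """activities: list of (name, mode, context) tuples; returns the empty cells."""
    covered = {(mode, context) for _, mode, context in activities}
    return [(m, c) for m in MODES for c in CONTEXTS if (m, c) not in covered]

plan = [
    ("Journal of practice", "Attend", "Personal"),
    ("Group jam session", "Embody", "Social"),
    ("Public performance", "Embody", "Cultural"),
]

for mode, context in audit_coverage(plan):
    print(f"No activity yet for {mode} x {context}")
```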
4. Resilience
Resiliency refers to the capacities within a person that promote positive outcomes such
as mental health and well-being, and provide protection from factors that might
otherwise place that person at increased developmental, social and/or health risk (Rowe &
Stewart, 2009; Fraser, 1997). Factors that contribute to resilience include personal
coping skills and strategies for dealing with adversity such as problem-solving,
cognitive and emotional skills, communication skills and help-seeking behaviors
(Fraser, 1997). This project builds on previous work that indicates that creative
activities can improve resilience.
There is an abundance of research that highlights the importance of the social
environment, or social relationships for fostering resilience (Maggi et al., 2005; Rowe &
Stewart, 2009; Lee & Stewart 2013). Social cohesion or connectedness refers to broader
features of communities and populations and is characterized by strong social bonds
with high levels of interpersonal trust and norms of reciprocity, otherwise known as
social capital (Siddiqi et al., 2007). This network of rich social relationships and strong connections promotes a sense of belonging and community connectedness which,
in turn, impacts on an individual’s mental health and overall well-being (AIHW, 2009).
Social capital, spirituality, family support and a strong sense of cultural identity are key
protective factors for Indigenous people (and children) (Malin, 2003).
Schools that aim to strengthen their capacity as healthy settings for living,
learning, working and playing, and are underpinned by inclusive participatory
approaches to decision-making and action, can help to build resilience (Rowe &
Stewart, 2009). Connectedness in the school setting has been shown to be a protective
factor against adolescent health risk behaviors related to emotional health, violence,
substance use and sexuality. Creative activities, especially collaborative ones such as
music making, share many of the characteristics that have been shown to promote
resilience. This project seeks to take advantage of these connections.
5. Collaboration and Sustainability
With relevant support from the Murri school community, the project offered the
opportunity to develop a creative and sustainable program for young people, in this case
young Indigenous Australians, to engage in collaborative music making activities using
interactive music technologies. Music technology is appropriate for the project because of its familiarity to young people and because of our expertise in the use of generative systems in collaborative music making.
A number of creative projects use music jamming as a means of improving creativity, social justice and wellbeing, often in collaboration with communities that are marginalized from mainstream society (Adkins et al. 2012). The GarageBand software for the iPad supports collaborative audio production through local synchronization via Bluetooth and through file and audio material export and import. When used as a musical instrument and compositional platform this software enables students to build on basic skills of exploration and improvisation and encourages engagement. These technologies are also easy for staff to learn and use and this, it is hoped, will increase the likelihood that the network jamming activities will
continue in the school beyond the life of this project. A number of strategies were used
to facilitate the sustainability of the activities. These include:
• Involvement of school administration and teaching staff in the planning and
execution of the activities.
• Integration of the music activities into the broader curriculum.
• Sharing of the musical outcomes amongst the school community.
• Regular reporting on progress with the school administration.
• Provision to leave the equipment used for the project with the school.
6. Case Study - iPads and Music at the Murri School
The goal of the project was to examine how music technology can work to improve
Indigenous health and wellbeing by creating a sustainable program for Indigenous youth
to engage in collaborative music making activities using interactive music technology.
Figure 2. Images from the project school
The project integrated music activities using the iPad into the normal school
curriculum and involved relevant teachers. It used standard classroom procedures and
resources but the project provided a facilitator proficient in the technologies and
familiar with theories and objectives of the project. The project involved a weekly
session with each class facilitated by a member of the project team and the class teacher.
Prior to commencing, approval to conduct the research was gained from Griffith University’s Human Research Ethics Committee. Teachers and students were provided with information about the project, and teachers were consulted about how the music-based activities might integrate with existing curriculum objectives. Many
teachers chose to incorporate creative writing tasks as the basis for song writing and
rapping. The project used a whole-school approach and classes were chosen from across
the full age range of the school for participation. Students and teachers were not
screened for musical background nor on any measure of resilience as we were keen to
investigate the versatility and flexibility of this approach across the school community.
After consultation with staff, three grade levels were selected to participate in
the project. The year levels and project summaries for these classes are summarized in
the following table.
Year Level   Approx. Age   Activity Objective
2/3          7/8           Students to write and record a short 4-line rap about the good qualities they see in themselves.
4            9             Students to record a creative interpretation of their sonic personal profiles utilizing sounds and music to express their personalities.
8            13            Students to write and record a sonic poem using text and music describing themselves and their hopes, expectations and dreams.
Table 1. Participating Groups and Activities
6.1 Designing music-based activities
Prior to facilitating the intervention with the students at the Murri school, a series of
generic activities were designed in order to facilitate creative participation in a way that
adheres to the philosophy interwoven in the aforementioned MEM framework. The key
objectives of the music-based activities designed for this project were to: 1) enable the students to engage in diverse music making opportunities that utilize music technology in a meaningful capacity; and 2) give participants the opportunity to engage in creative experiences that help to strengthen their sense of wellbeing and resilience.
The activities designed for each year level were collaboratively developed by the
researchers and participating class teachers, keeping the MEM in mind throughout this
process. Each teacher chose to utilize an age/ability appropriate literacy basis for their
class project in order to facilitate the opportunity for students to individually and
collectively express themselves and their interests in a personal and creative manner.
The objective for Term 1 was to enable students of each participating group to
develop and record their own composition using GarageBand on the iPads. The timeline
below outlines the context of each weekly session dedicated to the project, allowing for
students of each group to spend time experimenting, jamming, practicing playing and
recording instruments and external audio, and for recording the final product. The
objective for Term 2 was for students to develop and refine their work into a form ready
for a ‘signature’ event—a public performance at the school assembly.
Table 2 lists the mode and context of activities designed to achieve the key objectives of this project. Each cell corresponds to a specific mode and context combination within the MEM.

Modes of engagement (columns): ATTEND (Listening / Observing), EVALUATE (Reflecting / Analyzing), EXPLORE (Experimenting / Improvising), DIRECT (Conscious Decision Making / Instructing), EMBODY (Playing / Performing, Establishing habits).

PERSONAL (Of the self)
Objectives: (Attend) Independently listen, read and observe in order to become aware of relevant knowledge; (Evaluate) Independently reflect and analyze personal practice as a means of facilitating continued learning; (Explore) Independently explore and experiment with relevant artefacts and processes; (Direct) Engage in technical activities that lead to creating a musical artefact; (Embody) Independent practice / playing.
Activities: (Attend) Record/journal learning and practical experiences; (Evaluate) Music analysis to enable the development of aural skills; (Explore) Independently explore and experiment with sounds and functions of Network Jamming devices, building knowledge; (Direct) Setting up a jam session, composing a song; (Embody) Guided and independent play/practice of Network Jamming devices and processes to build skills.

SOCIAL (Collaborative)
Objectives: (Attend) Share work and progress with peers; (Evaluate) Reflect upon learning and practical experiences with peers as part of group discussions; (Explore) Extend learning through collaborative experimentation; (Direct) Take on a leadership role within a group activity; (Embody) Rehearse and record with a group.
Activities: Introduction to Network Jamming and demonstration of available interactive music hardware and software; workshop presentations of individual and collaborative engagement and progress with Network Jamming; group discussion; engaging in a group (networked) jam session; leading and conducting a jam session; time to play/practice with Network Jamming devices and processes collaboratively; group composition.

CULTURAL (Connection with external)
Objectives: (Attend) Observe relevant activity as performed in a public context; (Evaluate) Extend and connect reflective practice to include a wider cultural participation and dialogue; (Explore) Examine/research relevant practice in a wider cultural context; (Direct) Support and promote a musical artefact for public distribution; (Embody) Participate in a group public performance.
Activities: Attending/observing a performance that utilizes Network Jamming as a key composition/performance process; creating a blog/website as a reference for music work; investigating Network Jamming in diverse cultural contexts; exploring other commercial music apps; developing a creative project for public presentation; creating and promoting a CD/DVD showcasing creative progress; performing a group ‘Jam’ or composition to an audience.

Table 2. Music-based activities across the Meaningful Engagement Matrix.
6.2 Measuring resilience and engagement
Evaluation of this project relied on a mixed methods research design combining
quantitative and qualitative methods of data collection, analysis and inference in order
to investigate both the processes developed through the life of the project and the
impact of the project over time.
Students were asked to complete a modified version of a pre-existing resilience
questionnaire that has high levels of reliability and validity (e.g., the Healthy Kids Survey, California Dept of Education, 2004). Key informant interviews with staff were
conducted and subject to an ongoing thematic analysis. An introductory school
consultation session was attended by 9 staff members at the outset of the project. All
were supportive and identified ways that they could integrate the project into their
curriculum. Due to timetabling constraints only three of these staff and their classes are
participating in the project.
Thirty-four students participated in the project across three grade levels: Years
2/3 (14 students); Year 4 (12 students); and Year 8 (8 students in the English stream).
Activities included developing a Rap, recording a personal sonic profile and writing and
recording a bio-poem. Observations of class sessions were recorded in a journal by a
member of the research team. In addition, files of work completed on the iPad were
regularly saved allowing for analysis of the steps taken during the creative process.
7. Survey results summary
The first stage of data collection provided a baseline, and descriptive statistics show
some differences emerging between the younger students in Grade 2/3 and 4 and their
fellow students in Grade 8. We have not completed tests of statistical significance as the
sample is small. We provide, below, a selection of the results and findings. This
summary begins with some data from the first resilience survey to give a sense of the
students’ attitudes and expectations of the project.
Over 75% of the total student sample thought that being involved in the project
would be fun and most (younger students) were excited at the prospect. The creative
levels and aspirations of the students were uniformly high and almost all indicated that
they enjoyed going to music performances. However, compared to the grade 2/3 and 4
students who relished the creative opportunities of the project, a substantially lower
percentage of the Grade 8 sample felt confident and supportive of the activity and their
creative role.
With regard to their confidence with and support structure for creative activities:
• Over 85% of all students like making things that are creative and different.
• Students felt variously confident with their own creative ability and ideas (71% of Grade 2/3, over 90% of Grade 4, 63% of Grade 8).
• Most students have family/elders that they can go to for help (Grade 2/3 = 79%, Grade 4 = 90%, Grade 8 = 75%).
The students’ attitudes toward peer collaboration varied between the younger and older students. The following data reflect these attitudes to working with classmates:
• Students like to share their creative ideas with their classmates (Grade 2/3 = 78%, Grade 4 = 90%, Grade 8 = 37%).
• Students enjoy hearing about their classmates’ creative ideas (Grade 2/3 = 82%, Grade 4 = 85%, Grade 8 = 63%).
• Students thought that being a part of the project would help them have more friends (Grade 2/3 = 75%, Grade 4 = 75%, Grade 8 = 12%).
As with attitudes to collaboration, the students’ sense of self-confidence in public music making also reduced with age. In relation to producing a performance or recorded outcome:
• Students thought that they could put together a performance or recording that would be enjoyed by others (Grade 2/3 = 86%, Grade 4 = 66%, Grade 8 = 12%).
• Students felt that people would come to watch their performance or record launch (Grade 2/3 = 90%, Grade 4 = 75%, Grade 8 = 25%).
A clear trend in this data is the difference in reported self-confidence, in music at least,
between the younger (7-9 year old) and older (13 year old) students. This is consistent
with much more extensive research that shows a dip in self-confidence in adolescents
(Orenstein 1994). As a result of this, and supported by informal feedback from the grade
8 teacher, we adopted a different strategy for the older group. Activities for this class
focused more on personal meaning than on social or cultural meaning, and we tried to
minimize potentially embarrassing public presentations of the music. As well, work for
older students has a greater individual focus whereas activities for younger students are
heavily biased toward group work and include class and public presentation of
outcomes in the form of recorded media and live performance. What is interesting to note is that the accessibility features of the music technologies employed are equally applicable to both groups and approaches.
A comparison of results from participants in both baseline and follow-up surveys shows that, in relation to project participation, 75% of Grade 4, 72.7% of Grade 2 and 66.7% of Grade 8 thought that being involved in the project was fun all, or most of the
time. At the same time, however, participating in the project was also considered
stressful by some at least, with 27.3% of Grade 2/3 feeling worried about taking part in
the project all or most of the time (Grade 4=75%; Grade 8=33.3%).
8. Qualitative results summary
Qualitative data collected included interviews conducted with teachers and notes
maintained by research team members.
8.1 Pre-intervention results
Staff members recorded their initial plans for implementing the project within their
classrooms for Term 1 and Term 2, 2013. Eight out of the nine staff members
participated in this component of the staff session. Participant responses to what they hoped to achieve by being involved in the project included:
• For the students to record stories created for English unit. The story can be
edited and compiled onto a CD. Hopefully children will gain confidence in
speaking and sharing their stories/ideas.
• I would like to see students engage with iPad technology to enhance and extend
learning already happening in subjects.
• Improve teacher and student confidence and participation with technology;
having children work together cooperatively; tap into children’s different
learning styles i.e. rap songs to learn spellings; student enjoyment.
• To use the jamming as a learning/teaching tool in classroom – to integrate
curriculum to make learning fun.
• To learn myself and get children involved in expressing themselves orally and
musically.
• To record for a performance, to make learning fun and for students to use an
iPad.
• Enhancement of student work (oral and written) – familiarity with technology.
• Increase iPad literacy, learn with students how to use this tool for work.
The research team utilized the Meaningful Engagement Matrix to record the frequency and intensity of meaningful engagements observed in students participating in the project. Video footage and photography were also used to provide further documentation of project implementation activities, and for review and analysis.
8.2 Post-intervention results
Classroom management
In terms of general process, the participating classroom teachers had differing opinions
regarding how manageable it is to have a class of students work with the iPads for
engaging in learning and collaborative work. Two of the teachers felt that this was a
manageable task, whereas one of the teachers (Grade 4) felt that this process of learning
would work best in smaller groups as children may have difficulty listening to
instructions and paying attention in a larger group. Some of the challenges in participating in the project included students’ inability to share iPads; they preferred to work on their own. Another challenge lay in having a consistent and clear idea of the
long-term goal and clarifying goals for students to be achieved at the end of each
session.
One teacher felt that at times it seemed that the students were ‘all over the
place’. This was due to the students showcasing their ability to ‘jam’ on the iPads using
different musical instruments available on GarageBand. Jamming with colleagues
allows for creative expression that relies on self-expression. The grade 4 teacher felt that
not being present regularly, and not understanding how to use the iPads and
remembering it were challenges. Also, keeping all the students on task when the whole
class was involved was a challenge. She felt that keeping the iPad project in a small
group environment might assist in overcoming some of these challenges.
However, in terms of how satisfied the teachers were with the way the project
had been implemented in their class, there was general consensus that they felt that the
project went well and that the students looked forward to the sessions on the iPad.
Student engagement
The Grade 2 teacher was really impressed that his more challenging students, who
rarely engaged in classroom activities, were able to participate confidently in the
project. Those that had difficulty with directing their attention to one specific task for a
period of time were able to participate in the iPad sessions for the course of the weekly
schedule.
The Grade 8 teacher felt that he under-estimated the students’ reluctance to share
their work. He felt that his lack of knowledge of technology/iPads required increased
reliance on the research assistants. He acknowledged that the students had a product at
the end of the project, but considered that the iPads could have been better used.
Teacher engagement
The Grade 4 teacher felt that there were components of the program that she liked and
some parts of the program she did not find helpful in making the project run smoothly.
Teacher ownership is a critical success factor for the sustainability of the project. She
felt that because she wasn’t there most of the time for the weekly iPad sessions, she
found it difficult to gauge how the implementation was going. She indicated
that there were times when it was confusing what the object of the lesson was. This
reinforced the importance of working with the teachers to develop an action plan for
their students and take a leadership role in achieving their goals and objectives.
The participating teachers relied on the research assistants to set weekly plans
for the students, offering limited guidance and support. Behaviour management was a
challenge each week for the research assistants. Often teacher aides were the only other
adults present to provide additional supervision for the children and at times sessions
were taken up with disciplining students.
The ‘signature’ event
The grade 4 teacher stated that he enjoyed watching those children who performed on
assembly and that the end of project performance sounded good. He stated that some of
the students are normally really shy and would never get up on their own. But, because
they were in a group and focusing on the iPad they coped.
All teachers stated that they were happy with what their class had achieved by
participating in the project. The Grade 8 teacher stated that hopefully they will have
greater confidence to use technology in relation to English.
9. Findings
All teachers considered that their involvement in the project has made a difference to
the way they have looked at teaching. The grade 4 teacher stated that it gave her another
avenue through which to teach. Technology is the focus of our learning now, she said.
The grade 8 teacher stated that it has highlighted a need to use technology in the class.
Students have access to it outside of school and use it all the time; it is a tool he feels he
needs to tap into for learning. All participating teachers have plans to continue to use
this form of learning for future teaching.
The teachers felt it is beneficial to have a structure around using the iPads. Starting with structure was thought to be important, i.e. a weekly plan within a subject such as English. The grade 8 teacher felt that freedom to be creative can flow on from this.
All teachers felt that the project had had a positive impact on the students. The
Grade 8 teacher stated that the students looked forward to ‘Friday’ sessions. He stated
that although they were shy, he believes that they were secretly proud of what they did.
The Grade 4 teacher said that they loved it and looked forward to it. She also said that
she could use the iPads as a reward for good behavior.
All teachers stated that they would recommend using the iPads as an approach to
learning to other teachers. The Grade 2 and 8 teachers felt confident in sharing this
approach to learning with colleagues. All teachers felt that they would have liked more
professional development on using the iPads.
9.1. Lessons learnt
This project aimed to examine how music technology can work to improve student
health and wellbeing. The project aimed to offer the opportunity to develop a creative
and sustainable program for young Indigenous Australians to engage in collaborative
music making activities using interactive music technologies. The following lessons
have been learned from this pilot project:
• An in-class project of this nature requires relevant support from the whole Murri school (Indigenous) community.
• A planned period of in-service training and support with the teachers would help to ensure that the project is introduced with confidence and becomes sustainable beyond the life of the project.
• Small group work with all students accessing the technology would ensure better student engagement.
• A clear link between curriculum frameworks and the use of iPad technology would help to embed the project within the School’s learning framework.
• Indigenous students enjoy and engage with advanced technology within the classroom and develop meaningful, creative compositions.
• The Meaningful Engagement Matrix provides a strong theoretical framework for a School-based creative project.
• Additional research is needed to confirm the reliability and validity of the questionnaire, with consideration given to a range of instrument structures to allow for widely varying age/developmental conditions.
• This project provides a constructive and stimulating experience for many young people who find group work difficult and have communication difficulties.
• Public performance of creative, music-based projects provides important opportunities to enhance self-esteem and promote creative partnerships.
10. Conclusion
In this paper we have described our use of mobile technologies and software to make
music-based activities accessible to young people in a way that promotes meaningful
engagement and resilience. The project is based at the Murri school in Brisbane, Australia, a school dedicated to the education of Indigenous Australians. The project involved weekly activities with three classes from that school over 20 weeks, with students ranging in age from 7 to 13.
The design of project activities was informed by theories of meaningful
engagement and resilience, but was also guided by the advice of class teachers and student
survey responses to ensure appropriateness to the local context.
Data indicate that staff and students are enthusiastic about using the tablet
computers and music apps, and that their ease of use is making previously unimagined
music production activities accessible. Consistent with other studies, our data show a
dip in the creative self-confidence of students in their early teens (compared to younger students). This has been accommodated by shifting the emphasis for those students
toward individual and personal expression and away from collaborative and public
activities.
The portability of the iPad hardware has assisted with the integration of the
devices into the school environment, and their multi-purpose nature makes for fluid
shifts between music and other curricular tasks (such as creative writing). The
GarageBand software has facilitated rich music production outcomes, although the
devices alone provided limited audio recording and playback quality. We plan to
address this in the next stage of the project through more extensive use of external
microphones and headphones.
Indications are that the students can be keenly engaged in network jamming
activities but require ongoing facilitator support to maximize creative outcomes. The
features of the music-based activities with ubiquitous technologies align well with
characteristics that promote resilience, including personal autonomy and connectedness
with peers and adults, and we remain optimistic that evidence of a positive effect on
student resilience from the project can be achieved.
References
Adkins, B., Bartleet, B.-L., Brown, A. R., Foster, A., Hirche, K., Procopis, B.,
Ruthmann, A., & Sunderland, N. (2012) “Music as a tool for social transformation:
A dedication to the life and work of Steve Dillon (20 March 1953 - 1 April 2012)”.
International Journal of Community Music, 5(2), 189–205.
AIHW (Australian Institute of Health and Welfare). (2009) A Picture of Australia’s
Children: Health and Wellbeing of Indigenous Children. Canberra: AIHW.
http://www.aihw.gov.au/
Brown, A. R., & Dillon, S. (2007) “Networked Improvisational Musical Environments:
Learning through online collaborative music making”. In: J. Finney & P. Burnard
(Eds.), Music Education with Digital Technology, pp. 96–106. London: Continuum.
Brown, A. R., & Dillon, S. (2012) “Meaningful Engagement with music composition”.
In: D. Collins (Ed.), The Act of Musical Composition: Studies in the creative process,
pp. 79–110. Surrey, UK: Ashgate.
Fraser, M.W. (1997) Risk and Resilience in Childhood. USA: NASW Press.
Lee, P. C., & Stewart, D. (2013) “Does a socio-ecological school model promote resilience
in primary schools?” Journal of School Health. 83: 795-804.
Maggi, S., Irwin, L. G., Siddiqi, A., Poureslami, I., Hertzman, E., & Hertzman, C.
(2005) Knowledge Network for Early Child Development. British Columbia: World
Health Organisation.
Malin, M. (2003) Is Schooling Good for Aboriginal Children’s Health? Northern
Territory University: The Cooperative Research Centre for Aboriginal and Tropical
Health.
Orenstein, P. (1994) Schoolgirls: Young women, self esteem, and the confidence gap.
Anchor Press.
Rowe, F., & Stewart, D. (2009) “Promoting Connectedness through Whole-School
Approaches: A Qualitative Study”. Health Education, 109: 5, 396 - 413
Siddiqi, A., Irwin, L. G., & Hertzman, C. (2007) “Total Environment Assessment
Model for Early Child Development”.
www.who.int/social_determinants/.../ecd_kn_evidence_report_2007.pdf. Retrieved
20.05.10
Stewart, D., Sun, J., Patterson, C., Lemerle, K., & Hardie, M. (2004) “Promoting and
building resilience in primary school communities: evidence from a comprehensive
‘health promoting school’ approach”. International Journal of Mental Health
Promotion, 6(3), 26–33.
Stewart, D. (2014) “Resilience: an entry point for African health promoting schools?”
Health Education, 114: 3, 197 - 207
Progressive Disclosure
category: artistic demonstrations
Progressive Disclosure is a short piece in which, within an imaginary landscape, an unknown machine is progressively disclosed and explained in order to reveal its inner functions. The piece is a reflection on concepts of approach modalities and the comprehension of the properties, qualities and functions that an object possesses. Long, slow sound objects and impulsive sounds build up the piece. These elements are merged and extensively overlapped in order to develop an imaginary panorama with basic elements of a music vocabulary. Synthesized and acoustically derived sounds are both used, but the focus here is mainly on the description of a progressively closer observation of a visionary machine.
Dropbox link to the audio file (both HQ wav and mp3):
https://www.dropbox.com/sh/dgt7pkzaszimn0z/AADyeHU7l4yVDY-4moBAfHNia
The Beathealth Project: Synchronising Movement and
Music
Joseph Timoney1, Tomas Ward2, Rudi Villing2, Victor Lazzarini3, Eoghan
Conway2, and Dawid Czesak2
Departments of 1Computer Science, 2Electronic Engineering, and 3Music –
NUI Maynooth, Maynooth Co. Kildare, Ireland.
[email protected], {tomas.ward,rudi.villing,econway,dczesak}
@eeng.nuim.ie, [email protected]
Abstract. This paper will describe the new EU Beathealth project1: an
initiative to create an intelligent technical architecture capable of delivering
embodied, flexible, and efficient rhythmical stimulation adapted to
individuals’ motor performance and skills for the purpose of
enhancing/recovering movement activity. Additionally, it will explain how it
can exemplify the principles of Ubiquitous Music and how knowledge from
this field can suggest creativity-driven social enhancements.
1. Introduction
In recent times scientists have begun to seriously investigate how rhythm and music can be harnessed as a drug-free way of stimulating health (Pollack, 2014). Music
works on our autonomic nervous system, thus stimulating our sensations of wellbeing at
a subconscious level (Ellis and Thayer, 2010). This has naturally led behavioural
scientists to posit that this could be a source of inspiration for a whole new set of
therapeutic tools. Innovations in mobile technology in the last 10 years offer a very
promising means by which such therapies can be delivered whenever the user or patient
is free to practice them.
The collaborative research project ‘BeatHealth’ aims to be at the forefront of
these technological developments (Beathealth, 2014). The objective of the project is to
create a new method for improving health and wellness based on rhythmic stimulation.
To achieve this requires an age-friendly, portable system that has the capability to
invigorate the user through musical playlists and then simultaneously record their
movements (i.e., during walking or running) and physiological activity via advanced
sensors. These sensors must be tailored to the individual’s motor performance and
physiological response. Additionally, as the kinematic data and stimulation parameters
are collected on the fly they are to be recorded via a dedicated e-Health service
network-based application for storage on a cloud service. This will facilitate the
visualization of information on movement performance for the individual themselves
and for sharing among family members, doctors and coaches. Such access to this
information will empower the user to become aware of her/his motor condition, whether
healthy or deficient, and encourage them to adopt a more active lifestyle to either
enhance their performance or compensate for a motor disorder they might have.
1
http://www.euromov.eu/beathealth/homepage
An essential component of this application is the delivery of the music used to stimulate the kinematic activity. However, it is not simply a playback mechanism, but instead takes a significant role in the process. The belief is that by encouraging entrainment, or synchronization, between the music and the movement, the maximum benefits should be obtained. This can be realized at both a coarse and a fine level: by choosing music whose tempo is simply close to the rhythm of movement, or, going further, by using audio processing techniques to dynamically adapt the beat pattern of the music to exactly match it.
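As a rough illustration of the coarse and fine matching just described (a sketch under assumed inputs, not the BeatHealth implementation), the following fragment picks the playlist track whose tempo is closest to a measured movement cadence and then computes the time-stretch ratio that would align its beat exactly with that cadence.

```python
# Illustrative only: coarse selection by nearest tempo, then a fine adjustment ratio.
# Assumes each track's tempo (BPM) is known and cadence is measured in steps per minute.

def coarse_match(cadence_spm, playlist):
    """Pick the track whose tempo is closest to the movement cadence."""
    return min(playlist, key=lambda track: abs(track["bpm"] - cadence_spm))

def fine_match_ratio(cadence_spm, track_bpm):
    """Time-stretch ratio that would make the track's beat coincide with the cadence."""
    return cadence_spm / track_bpm

playlist = [{"title": "A", "bpm": 128}, {"title": "B", "bpm": 150}, {"title": "C", "bpm": 170}]
cadence = 163.0  # hypothetical steps per minute from a step sensor
track = coarse_match(cadence, playlist)
print(track["title"], round(fine_match_ratio(cadence, track["bpm"]), 3))
```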
While not specifically being a music making application, the integration
between music and computing in the ‘BeatHealth’ project means that it is related to a
branch of the research field of Ubiquitous Music Systems. According to Pimenta et al. (2004), these systems should support mobility, social interaction, device independence, and context awareness. Certainly, at first glance, it would seem that ‘BeatHealth’ satisfies these criteria. Additionally, in establishing such a connection, ideas from Ubiquitous Music systems may inspire tangential developments. The remainder of the paper aims to investigate this more fully. Firstly, some detail on the conceptual framework behind ‘BeatHealth’ will be given, followed by an outline of the technological architecture. These will be covered in Sections 2 and 3 respectively. Section 4 will set out the characteristics of Ubiquitous Music Systems, while Section 5 will discuss the
relationship between ‘BeatHealth’ and these systems. Section 6 will provide some
conclusions and future work.
2. The Theory and Science of BeatHealth
Appreciation of musical rhythms is an important feature of human culture. A key
feature of rhythm is an underlying regular beat: a perceived pulse that marks equally
spaced points in time (Cooper and Mayer, 1960), (Lerdahl and Jackendoff, 1983).
Humans are unique in their ability to couple movement to external rhythms.
Beat perception can feel automatic and the majority of the adult population can easily
achieve this (Drake, Penel, and Bigand, 2000); the ability to engage in dancing being an
obvious example. It occurs without musical training and can even be seen in young
children. Neuroimaging has confirmed activity in “motor areas” of the brain during the
production and perception of rhythm (Schubotz, Friederici, and von Cramon, 2000),
(Danielsen et al, 2014). Thus, moving to the beat of an external auditory stimulus is
sustained by a dedicated neuronal circuitry including subcortical areas, such as the basal
ganglia and the cerebellum, and cortical regions (e.g., temporal cortex, prefrontal areas,
and the Supplementary Motor Area) (Repp and Keller, 2008), (Zatorre, Chen, and
Penhune, 2007). The basal ganglia particularly show a specific response to the beat
during rhythm perception, regardless of musical training or how the beat is indicated. A
natural extension of these findings to applied research is to exploit rhythm as a way to
enhance movement performance. Rhythm, by its tendency to recruit regions of the brain
involved in motor control and timing (Zatorre, Chen, and Penhune, 2007), (Grahn and
Brett, 2007), and by fostering synchronized movement, is ideally suited for modifying
and improving movement performance (e.g., increasing movement speed or frequency
or reducing variability). It is worth noting that the basal ganglia mentioned above are
compromised in people suffering from motor disorders, for example Parkinson's
disease, and patient studies have shown that they exhibit deficits in timing tasks
(O'Boyle, Freeman, and Cody, 1996). However, rhythmic signals with a strong external
beat have been observed to ameliorate gait problems in persons with Parkinson's disease
(Nombela et al., 2014).
2.1 Entrainment and Self-Entrainment
This link between an external rhythm and the human body’s movement response is a
phenomenon known as entrainment (Clayton, Sager, and Will, 2004). This theory
describes the synchronicity of two or more independent rhythmic processes. Among its
many applications entrainment also appears as a topic in music research and is best
illustrated in its use in the study of musical meter. An element of meter is the ‘beat’: a perceived emphasis of certain events or pulses that are equally spaced in time (Trost et al., 2014). A current model under study by music psychologists is the Dynamic Attending Theory (DAT), which focuses on the role of metrical structure as an active listening strategy (Bolger, Trost, and Schön, 2013). Essentially, rather than assuming that the perception of time and meter is solely determined by the musical cues transmitted from performer to listener, this model proposes that rhythmic processes endogenous to the listener entrain to cues in the musical sound. This entrainment model appears to better reflect the cognitive processes than others (Bolger, Trost, and Schön, 2013). It has also been suggested that the entrainment concept can be used in the study of proto-musical behavior in infants (Bolger, Trost, and Schön, 2013).
Not all entrainment involves an external stimulus, either environmental or interpersonal. 'Self-entrainment' describes the case where two or more of the body's oscillatory systems, such as respiration and heart rhythm patterns, become synchronized (Phillips-Silver, Aktipis, and Bryant, 2010). It is the rhythmic responsiveness to self-generated rhythmic signals. A simple block diagram of the process involved is shown in Figure 1 (Phillips-Silver, Aktipis, and Bryant, 2010). In the figure, the feedback from the output to the rhythmic input of the entrainment system is the source of the self-entrainment.
Figure 1. Block diagram illustrating the process of self-entrainment: rhythmic input → entrainable system → rhythmic output, with the output fed back to the rhythmic input (Phillips-Silver, Aktipis, and Bryant, 2010).
It has been considered that complex-bodied humans and animals typically exhibit self-entrainment in their physical activity, that is, a gesture by one part of the body tends to
entrain gestures by other parts of the body (Clayton, Sager, and Will, 2004). For
example, arm movements in walking could, in principle, be totally independent from leg
movements, but in fact they are not. It 'feels' much easier, is more harmonious, and less
strenuous if the arms lock into the leg movements. A similar effect is reported for the
locking of step and inhalation cycles in jogging (Clayton, Sager, and Will, 2004). The
degree and kind of self-entrainment exhibited depends on the individual and the task
being carried out.
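The locking behaviour described above can be illustrated with a toy numerical model (ours, purely for illustration and not drawn from the cited studies): a single phase oscillator with its own preferred frequency is weakly coupled to an external isochronous stimulus and, provided the coupling is strong enough relative to the frequency mismatch, settles to a constant phase relationship with it.

```python
# Toy demonstration of entrainment: a Kuramoto-style phase oscillator
# ("movement") being pulled toward an external rhythmic stimulus.
import math

def simulate(stim_hz=2.0, own_hz=1.9, coupling=1.5, dt=0.001, seconds=30.0):
    """Return the range of the stimulus-movement phase difference over the final 5 s."""
    phase_stim = phase_move = 0.0
    diffs = []
    steps = int(seconds / dt)
    for i in range(steps):
        phase_stim += 2 * math.pi * stim_hz * dt
        # The movement oscillator is nudged toward the stimulus phase.
        phase_move += dt * (2 * math.pi * own_hz
                            + coupling * math.sin(phase_stim - phase_move))
        if i >= steps - int(5.0 / dt):
            diffs.append((phase_stim - phase_move) % (2 * math.pi))
    return min(diffs), max(diffs)

low, high = simulate()
print(round(high - low, 4))  # close to 0: the two rhythms have phase-locked
```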
2.2 Entrainment and Health
As mentioned above, the concept of Entrainment is readily applicable to the human
body and its response to external stimuli. Relevant medical research has considered the
behavior of endogenous physiological rhythms in humans (such as the variation of body
temperature over the 24-hour cycle), and how the study of those rhythms might be
further developed as a tool in the diagnosis of pathological states. The hope is that this
could lead to the development of new treatments. Other research investigations are
considering the field of music therapy and determining a link between entrainment and
socialization.
However, the relationship between entrainment, the stability of biological
rhythms and health is still not well understood. There are examples of where relatively
stable and entrained biological rhythms are associated with good health. A good
example is the enhanced stability of the heart rate afforded by a pacemaker. Conversely, asynchrony and instability of rhythmic processes can be associated with pathologies (Clayton, Sager, and Will, 2004). However, entrainment does not necessarily imply stability of biological rhythms, and stability on its own is not necessarily associated with good health. The behavior of brain waves is a case in point: stable brain waves
may indicate a condition such as epilepsy, while unstable waves can indicate a healthy
state (Clayton, Sager, and Will, 2004). A certain amount of flexibility and dynamic
equilibrium is more likely to be associated with health in many systems, as is a degree
of "noise", or random variation in normal physiological rhythms (Clayton, Sager, and
Will, 2004).
According to (Phillips-Silver, Aktipis, and Bryant, 2010) the capacity to exhibit
the simplest form of entrainment emerges when three critical building blocks are in
place: (1) the ability to detect rhythmic signals in the environment; (2) the ability to
produce rhythmic signals (including rhythmic signals that are byproducts of other
functions, such as locomotion or feeding behavior); and (3) the ability to integrate
sensory information and motor production that enables adjustment of motor output
based on rhythmic input. Observing these three criteria can indicate whether
entrainment is being manifested in a healthy or less healthy, or pathological, manner. If
the healthy functioning of a system requires a certain degree of entrainment, then either
a lack of entrainment, a weakening or even an excessive strengthening of entrainment
can be associated with a change to a pathological state (Phillips-Silver, Aktipis, and
Bryant, 2010).
2.3 Stimulating health through entrainment
The fundamental idea is that stimulating entrainment between auditory rhythmical cues and spontaneous or deliberate movement boosts individual performance and leads to enhancements in health and wellness. For healthy people, this means that they
should synchronize their movement with the beat of an external music source when
dancing or when performing physical and sport activities such as running or cycling.
This should lead to measurable improvements in their gait kinematics, for example
increased velocity and cadence (Wittwer, Webster, and Hill, 2013), and produce (i)
better coupling between breathing and running, (ii) a reduction of energy expenditure,
(iii) a general increase in endurance, and (iv) a desire to run (Hoffmann, Torregrosa,
and Bardy, 2012). Additionally, entrainment has a role in a therapeutic context where
movement is constrained by a motor disease. One study reported how it has been integrated into a rehabilitation therapy for patients with motor disorders (Wittwer, Webster, and Hill, 2013). The idea is to use external rhythmical cues to help patients regularize their gait. The patient is asked to match her/his walking speed to a regular stimulation in the form of a repeated isochronous sound (metronome) (Nombela et al., 2013).
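As a minimal sketch of this kind of cueing (assumed parameters, not the clinical protocol of the cited studies), the fragment below turns a target cadence in steps per minute into the onset times of an isochronous click sequence.

```python
# Illustrative only: convert a target cadence into metronome click onset times.

def click_times(cadence_spm, duration_s):
    interval = 60.0 / cadence_spm            # inter-onset interval in seconds
    n = int(duration_s / interval)
    return [round(i * interval, 3) for i in range(n)]

print(click_times(cadence_spm=100, duration_s=3.0))  # five clicks, 0.6 s apart
```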
2.4 Related Consumer Technologies
To date only a few consumer applications and technologies that exploit rhythm for
enhancing movement have been introduced. These have been designed for healthy
people who are trying to improve their exercise regime. Yamaha released BODiBeat in 2007 (Yamaha, 2007), followed by Philips’ Activa in 2010 (Philips, 2010). Applications
are now appearing for mobile devices. No similar commercial products are available for
people with movement disorders.
This technology is in its infancy, however. There is a lack of sophistication in the means of achieving and maintaining synchronization between the music and the movement.
Furthermore, there is a need for more scientific insight into how best to capture and
analyze the relevant physiological signals and to relate them to the auditory cues. This
is the motivation for BeatHealth. Its objective is to realize an intelligent technological
architecture that can deliver flexible and efficient rhythmical stimulation that can be
adapted to any individual's skills, whether the individual is healthy or not, which will
enhance and monitor features of their movement performance. The next section will
explain the organization of the ‘Beathealth’ project.
3. Beathealth Organisation
The fact that there are gaps in both the current science and technology meant that the
Beathealth project needed to be a highly multidisciplinary endeavor, requiring input
from physiology scientists, medical consultants, music technology researchers, and
software engineers. The BeatHealth project was designed for healthy citizens of various
ages that engage in physical activity and for patients with the movement disorder of
Parkinson's disease. Three primary challenges were identified for the project: (i)
fundamental research aimed at improving information parameters for maximizing the
beneficial effects of rhythmic stimulation on movement kinematics and physiology, (ii)
technological development to achieve a state-of-the-art implementation platform to deliver the rhythmical stimulation that has attributes of portability, flexibility and reliability, and (iii) the creation of a new IT service in the form of a network-based application for collecting kinematic data on the fly and sharing them online with others such as medical doctors, family, and trainers. The process of facing these
challenges can be illustrated in the three interconnected areas shown in figure 2 below:
Figure 2. The three areas of Beathealth and their interconnections: rhythmic stimulation, the mobile app, and the cloud service.
Rhythmic stimulation is about the boosting of motor performance. It is a fundamental
scientific research component of the project. This aims to improve our knowledge of the
auditory stimulation parameters that are best suited for entraining movement. This will
be investigated for both healthy individuals and patients with motor disorders. For
patients with motor disorders it will investigate how to produce more effective novel
therapies using such stimulation parameters. It is of particular interest to find
rehabilitation strategies that can create long term benefits that will extend beyond the
clinic.
For the audio stimuli, attention will be devoted to understanding which type of
stimulus (i.e. existing music or artificially generated signals) best fits the particular
individual preferences and functionalities in relation to the motivational effort. Possibly,
the use of automated composition tools may also help for certain tasks.
The Mobile Application for the Beathealth system is a redevelopment of, and builds on the ideas of, D-Jogger (Moens, Van Norden, and Leman, 2010), which was previously developed by one group in the project consortium. The structure is that a sensor or sensors detect bodily movement and complementary physiological responses, and these
sensor responses are transmitted to a mobile device that is carried by the user.
Processing of the sensor signals is required to smooth out noisy fluctuations if the user
is engaged in vigorous activity. Special algorithms are required in the case of multiple
sensors to fuse the signals together into a single waveform that is used to synchronize
the auditory stimulation with the rhythm of the activity in an optimal manner.
Figure 3. The user process of self-entrainment using the mobile app (sensor input → mobile app → audio output → self-entrainment of the user).
The mobile application will contain the playlist of audio stimuli, which will either reside on the device itself or be streamed over the network. A music synchronization algorithm (Moens, Van Norden, and Leman, 2010) will be responsible for aligning the tempo of the audio stimuli with the movement. Thus, the user will self-entrain their movements with the audio, which will in turn affect the periodicities of the sensor input. Figure 3 shows a block diagram of the process, which matches the feedback system of Figure 1. Lastly, the mobile application will be available to run on a cost-effective smartphone platform.
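As an illustration of this alignment process (and not of the actual BeatHealth or D-Jogger implementation, which is not detailed here), a minimal Python sketch of a cadence-to-tempo loop is given below. The class, the smoothing factor and the step timestamps are assumptions introduced purely for clarity.

    # Illustrative sketch only: estimate step cadence from timestamps and
    # derive a smoothed playback-rate ratio that nudges the track tempo
    # towards the user's movement. Not the BeatHealth implementation.
    from collections import deque

    class CadenceTracker:
        """Estimates steps per minute from a rolling window of step timestamps."""
        def __init__(self, window=8):
            self.steps = deque(maxlen=window)

        def add_step(self, t):  # t: step time in seconds
            self.steps.append(t)

        def cadence_bpm(self):
            if len(self.steps) < 2:
                return None
            span = self.steps[-1] - self.steps[0]
            return 60.0 * (len(self.steps) - 1) / span if span > 0 else None

    def playback_rate(track_bpm, cadence, previous_rate=1.0, alpha=0.2):
        """Smoothed time-stretch ratio aligning the track tempo with the cadence."""
        if cadence is None:
            return previous_rate
        target = cadence / track_bpm
        return (1 - alpha) * previous_rate + alpha * target

    # Example: a 120 BPM track adapting to a runner at roughly 150 steps per minute.
    tracker = CadenceTracker()
    for t in [0.0, 0.41, 0.80, 1.21, 1.60, 2.01, 2.40, 2.80]:
        tracker.add_step(t)
    print(round(playback_rate(track_bpm=120.0, cadence=tracker.cadence_bpm()), 3))

The exponential smoothing factor stands in for the noise-reduction step mentioned above: it prevents abrupt tempo jumps when individual step intervals fluctuate.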
The Cloud Service is a network-based application for visualizing and sharing information on the movement performance collected via the mobile application. These data will be sent on the fly over the internet and made available on a dedicated e-Health platform.
The user will be able to create and maintain a profile facilitating ongoing regular
assessment and monitoring of physical fitness and wellbeing. The user’s health
consultants can also access this information for assessment. Examples of current
commercial services are Apple’s HealthKit (Apple, 2014) and Microsoft’s HealthVault
(Microsoft, 2014).
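A minimal sketch of this kind of interaction is given below, assuming a hypothetical REST endpoint and payload format; neither the URL, the field names nor the token scheme correspond to the actual BeatHealth or e-Health platform APIs.

    # Illustrative sketch: post a session summary to a hypothetical e-Health endpoint.
    import json
    import urllib.request

    def upload_session(summary, url="https://ehealth.example.org/api/sessions",
                       token="USER_TOKEN"):
        payload = json.dumps(summary).encode("utf-8")
        req = urllib.request.Request(
            url, data=payload,
            headers={"Content-Type": "application/json",
                     "Authorization": "Bearer " + token},
            method="POST")
        with urllib.request.urlopen(req) as resp:  # network call; may raise URLError
            return resp.status

    session = {"user_id": "anon-042", "date": "2014-10-30",
               "mean_cadence_bpm": 148.7, "mean_stride_length_m": 1.21,
               "duration_s": 1800}
    # upload_session(session)  # requires a live endpoint, so left commented out

In practice such a service would also need authentication, consent management and access control, since the health consultants mentioned above would share access to the user's records.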
3.1 Beathealth Evaluation
The BeatHealth application will be validated continuously throughout the project with healthy users and with patients with motor disorders. Close to the end of the project a full evaluation procedure will be carried out. Indicators of change in performance, along with changes in health status and in the motivation to perform physical activity, will be recorded. Alongside this, the quality of the software product itself will be measured and analyzed using metrics that assess attributes such as usability and efficiency.
4. Ubiquitous Computing and Music
Following mainframe and personal computing, ubiquitous computing is considered to
be the third wave of computing technology (Moloney, 2011). It is also known as
‘Pervasive computing’. The underlying idea is that as technology improves devices
become smaller but with increasing power such that they can be imperceptibly
embedded in the general environment, thus delivering ubiquitous access to a computing
environment (Moloney, 2011). Its benefit is that it simplifies people’s lives: the technology uses sensors to understand what users are doing in the world and then self-adapts to respond to their needs. The five key components of
ubiquitous computing systems have been determined as being (Kurkovsky, 2007): (1)
Embedded and Mobile Devices (2) Wireless Communications (3) Mobility (4)
Distributed Systems and (5) Context Awareness and Invisibility. Ubiquitous computing
integrates a broad range of research topics, which includes, but is not limited to,
distributed computing, mobile computing, location computing, mobile networking,
context-aware computing, sensor networks, human-computer interaction, and artificial
intelligence. The initial incarnation of ubiquitous computing was in the form of "tabs",
"pads", and "boards" (Weiser, 1991) built at Xerox PARC from 1988-1994. However, it
has undergone a revolution with the advent of the mobile smartphone, which facilitates a ubiquitous computing that is “invisible, everywhere computing that does not live on a personal device of any sort, but is in the woodwork everywhere” (Weiser, 1991). The
mobile phone has now become a true manifestation of the pervasive service and is much
easier for the majority of users to conceptualize and interact with (Roussos, Marsh, and
Maglavera, 2005).
Ubiquitous music is a research area that is a subset of ubiquitous computing and
features mobile and networked music, eco-composition and cooperative composition. A
ubiquitous computing music system can be defined as a musical computing
environment that supports multiple users, devices, sound sources and activities in an
integrated way. The technology allows for mobility, social interaction, device
independence, and context awareness (Pimenta et al., 2009). However, ubiquitous music systems place strong demands on the computing interface. A good example is the use of mobile devices: depending on the desired activity, there may be needs beyond the screen interface, requiring context-awareness mechanisms and location-specific configuration of parameters that call for sensor or actuator capabilities in the system. Nevertheless, the benefit of the ubiquitous computing platform for music is that it may empower non-musicians as well as musicians to express themselves through the medium of music in a collective, open-ended manner.
5. Relationship of Beathealth to Ubiquitous Music and Computing
In its architecture the BeatHealth application certainly reflects all the components of ubiquitous computing given in Section 4. It employs external sensors for gathering physiological and kinematic information. It runs on a smartphone. Gathered data are
stored on a cloud service. Ideally, the audio tracks will come from a streaming service.
Lastly, it reacts to the user movement by adapting the audio in terms of its beat pattern
to fit with the rhythm of the movement.
With respect to the definition of ubiquitous computing and music, the BeatHealth application, with some adjustment, can align itself with the concepts promulgated by the practitioners in this field. It facilitates an alignment between movement and music, so
strictly speaking it is not an instrument of musical expression or composition. However,
the alignment it does facilitate embodies a profound interaction between the human user
and the computing system playing the audio: the rhythmic time-scale of the audio
adapts to the movement of the user. Thus, the user is engaging physically and mentally
with the music in a dynamic feedback system, as in the self-entrainment system of
Figure 1. Additionally, as mentioned in Section 3, the BeatHealth application is not necessarily constrained to use standard commercial audio tracks. In fact, it has the flexibility to allow the use of artificially generated test signals that can be applied in
the scientific study of movement. An example could be Amplitude Modulated sounds
(Joris, Schreiner, and Rees, 2004). Moreover, if desired, the commercial audio can be
extended or replaced to incorporate other compositionally inspired sounds that a user
may desire or even require. This means that the Beathealth application can be brought
beyond its original intention as an ‘exercise app’ or a novel therapeutic tool for patients
with motor disorders. This can lead to more creative approaches to auralizing all the
potential kinematic features derived within the complete Beathealth framework where
specific apps are just particular manifestations of what it can be configured to achieve.
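To make the test-signal option concrete, the sketch below generates a simple amplitude-modulated stimulus whose modulation rate matches a target cadence. The carrier frequency, modulation depth and output format are illustrative assumptions, not parameters prescribed by the project.

    # Illustrative sketch: write a mono 16-bit WAV file containing a sinusoidal
    # carrier whose amplitude is modulated at one cycle per beat.
    import math
    import struct
    import wave

    def am_stimulus(filename, carrier_hz=440.0, cadence_bpm=120.0,
                    depth=1.0, duration_s=5.0, sr=44100):
        mod_hz = cadence_bpm / 60.0  # one modulation cycle per beat
        frames = []
        for n in range(int(duration_s * sr)):
            t = n / sr
            env = 0.5 * (1.0 + depth * math.sin(2 * math.pi * mod_hz * t))
            sample = env * math.sin(2 * math.pi * carrier_hz * t)
            frames.append(struct.pack("<h", int(sample * 32767 * 0.8)))
        with wave.open(filename, "wb") as w:
            w.setnchannels(1)
            w.setsampwidth(2)
            w.setframerate(sr)
            w.writeframes(b"".join(frames))

    am_stimulus("am_120bpm.wav")  # a 5-second stimulus pulsing at 120 beats per minute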
If it is configured to support multiple users, dealing with user-selectable audio streams that can be modified by the users’ activities, it can become a compositional tool within a suitable environment that stimulates kinematic activity. The activity and setting then define what the BeatHealth application can be. Thus, with real-time modification of multiple user-defined audio streams performed as a collective activity, it can transform the activity into a social and artistic experience. Such an experience engages users on many levels, harnessing the intellectual and the emotional along with the physical self. The impact this may have on the sense of wellness could be more profound than that of a kinematic motivator alone.
6. Conclusion
This paper has discussed the scientific background to the Beathealth project, explaining
its origins from the theories of entrainment, and particularly self-entrainment. It then
explained the organization and components of the Beathealth application itself. The
features of ubiquitous computing systems were then discussed, followed by the specifics of such systems when designed for music. Finally, the paper described how the BeatHealth application fits within these definitions, and also suggested how it can be brought beyond its original health and therapeutic contexts towards a vision in which it embodies social interaction among multiple users, stimulating a musical creativity fused with kinematics that could enhance the sense of wellness it delivers to collectives of users.
7. Acknowledgement
The BeatHealth (‘Health and Wellness on the Beat’) project (no. 610633) has received research funding from the European Union under the FP7 program (2011-2014). The work in this paper reflects only the authors’ views; the European Union is not liable for any use that may be made of the information contained therein.
8. References
Pollack, S., ‘Scientists investigate health benefits of music, rhythm and movement’, The
Irish Times. Jan. 14, 2014.
Ellis, R., and Thayer, J.F., ‘Music and Autonomic Nervous System (Dys)function’, Music Perception, 27(4), Apr. 2010, pp. 317–326.
BeatHealth: Health and Wellness on the Beat, 2014,
http://www.euromov.eu/beathealth/homepage
Pimenta, M., Flores, L.V., Capasso, A., Tinajero, P., and Keller, D., ‘Ubiquitous Music:
Concepts and Metaphors,’ in Proc. of the XII Brazilian Symposium on Computer Music,
Recife, 2009, pp. 139-150.
Cooper, G.W., and Meyer, L.B., The rhythmic structure of music, University of Chicago
press, 1960.
Lerdahl, F., and Jackendoff, R., A generative theory of tonal music. Cambridge, MA:
MIT Press, 1983.
Drake, C., Penel, A., and Bigand, E., ‘Tapping in time with mechanically and
expressively performed music’, Music Perception, 18(1), 2000, pp. 1–23.
Schubotz, R., Friederici, A.D., and von Cramon, Y., ‘Time perception and motor
timing: a common cortical and subcortical basis revealed by fMRI’, Neuroimage, 11,
2000, pp. 1–12.
Danielsen, A., Otnæss, M.K., Jensen, J., Williams, S.C.R., and Østberg, B.C.,
‘Investigating repetition and change in musical rhythm by functional MRI’,
Neuroscience, 275(5), Sept. 2014, pp. 469–476.
Repp, B.H. and Keller, P., ‘Sensorimotor synchronization with adaptively timed
sequences’, Human Movement Science, 27, 2008, pp. 423–456
Zatorre, R. J., Chen, J. L., and Penhune, V. B., ‘When the brain plays music: auditory–
motor interactions in music perception and production’, Nature Reviews Neuroscience
8, 2007, pp. 547-558.
Grahn, J.A., and Brett, M., ‘Rhythm and beat perception in motor areas of the brain’,
Journal of Cognitive Neuroscience, 19(5), 2007, pp. 893–906.
O'Boyle, D. J., Freeman, J.S., and Cody, F.W., ‘The accuracy and precision of timing of
self-paced, repetitive movements in subjects with Parkinson's disease,’ Brain, 119,
1996, pp. 51-70.
Nombela, C., Hughes, L.E., Owen, A. M., and Grahn, J.A., ‘Into the groove: Can
rhythm influence Parkinson's Disease?’, Neuroscience & Biobehavioral Reviews,
37(10), 2013, pp. 2564-2570.
Clayton, M., Sager, R., and Will, U., ‘In time with the music: The concept of
entrainment and its significance for ethnomusicology,’ ESEM Counterpoint, 1, 2004,
pp. 1–82.
Trost, W., Frühholz, S., Schön, D., Labbé, C., Pichon, S., Grandjean, D., and
Vuilleumier, P., ‘Getting the beat: Entrainment of brain activity by musical rhythm and
pleasantness,’ NeuroImage, 103, 2014, pp. 55-64.
Bolger, D., Trost, W., and Schön, D., ‘Rhythm implicitly affects temporal orienting of
attention across modalities’, Acta Psychologica, 142, 2013, pp. 238-244.
Phillips-Silver, J., Aktipis, A., and Bryant, G., ‘The ecology of entrainment:
Foundations of coordinated rhythmic movement,’ Music Perception, 28 (1), 2010, pp.
3-14.
Wittwer, J.E., Webster, K. E., and Hill, K., ‘Music and metronome cues produce
different effects on gait spatiotemporal measures but not gait variability in healthy older
adults’, Gait Posture, 37, 2013, pp. 219–222.
Hoffmann, D., Torregrosa, G., and Bardy, B.G., ‘Sound stabilizes locomotor-respiratory
coupling and reduces energy cost’, PLoS ONE, 7(9), e45206, 2012.
Yamaha Corp. BodiBeat, 2007, (http://www.yamaha.com/bodibeat/)
Philips, Activa, 2010, (http://www.usa.philips.com)
Moens, B., Van Norden, L., and Leman, M., ‘D-Jogger: syncing music with walking,’
in Proceedings of the 2010 Sound and music computing conference, Barcelona, Spain.
Apple HealthKit, 2014, (https://www.apple.com/ios/whats-new/health/)
Microsoft HealthVault, 2014, (https://www.healthvault.com/ie/en)
Moloney, M., ‘Into the future – ubiquitous computing is here to stay,’ Dublin Institute
of Technology Paper, 2011.
Kurkovsky, S., ‘Pervasive computing: Past, present and future’, in Proc. of the 5th
International Conference on Information and Communications Technology (ICICT
2007), Dec. 2007, Cairo, Egypt, pp. 65-71.
Weiser, M., ‘The Computer for the 21st Century’, Scientific American magazine,
265(3), Sept. 1991.
Roussos, G., Marsh, A.J., and Maglavera, S., ‘Enabling pervasive computing with
smartphones’, IEEE Journal of Pervasive Computing, 4(2), April 2005, pp. 20-27.
Joris, P.X., Schreiner, C.E., and Rees, A., ‘Neural Processing of Amplitude-Modulated
Sounds’, Physiological Review, 84(2), Apr. 2004, pp. 541-77.
Characterizing resources in ubimus research: Volatility and rivalry
Damián Keller
Núcleo Amazônico de Pesquisa Musical (NAP), Universidade Federal do Acre, Rio
Branco, AC, Brasil - Grupo de Música Ubíqua
[email protected]
Abstract. In this paper I identify three methodological approaches to creativity-centered design: the computational approach, the dialogical perspective and the
ecologically grounded framework. And I analyze how these three methods relate to a
current definition of the ubiquitous music field (ubimus). Social interaction is one of
the factors to be accounted for in ubimus experimental studies. I propose the label
social resources for the shared knowledge available within a community of practice.
I identify five aspects of creativity-centered design that have targeted social
resources. Then I discuss material resources as factors to be considered for the
design of ubimus ecosystems and present two new design qualities as variables for
experimental studies: volatility and rivalry. This discussion is framed by a split
between creative products and creative resources which points to three observables:
material resources, material products and material by-products, including creative
waste. I conclude with a summary of the main proposals of the paper and point to
applications of these concepts in experimental design studies.
Resumo. Neste artigo identifico três linhas metodológicas em design criativo – o
enfoque computacional, a perspectiva dialógica e o método cognitivo-ecológico – e
analiso como essas linhas se relacionam com uma definição recente do campo de
pesquisa em música ubíqua (ubimus). A interação social é um dos fatores que devem
ser considerados nos estudos experimentais em ubimus. Proponho o conceito de
recursos sociais para o conhecimento compartilhado dentro da comunidade de
prática e identifico cinco aspectos do design criativo que tratam dos recursos
sociais. Seguidamente discuto os recursos materiais como fatores para o design de
sistemas musicais ubíquos, sugerindo duas qualidades de design como variáveis
para estudos experimentais: a volatilidade e a rivalidade. A proposta tem como
contexto a separação entre os produtos criativos e os recursos criativos, apontando
para três tipos de fatores observáveis: recursos materiais, produtos criativos e
produtos materiais não intencionais, incluindo o lixo criativo. O texto finaliza
resumindo as propostas conceituais e indicando aplicações desses conceitos nos
estudos experimentais de design.
1. Ubimus methodological proposals
Since 2007, our group has been engaged in a multidisciplinary effort to investigate the
creative potential of converging forms of social interaction, mobile and distributed
technologies and materially grounded artistic practices. We have proposed the adoption
of the term 'ubiquitous music' (ubimus) to define practices that empower participants of
musical experiences through socially oriented, creativity-enhancing tools [Keller et al.
2011a]. Ubiquitous music is defined as a research field that deals with distributed
systems of human agents and material resources that afford musical activities through
sustainable creativity support tools. This consensual definition, established through
collaborative work within our community of practice, summarizes the research efforts of
three distinct but complementary methodological approaches to the study of ubimus
phenomena: (1) the computational perspective, (2) the dialogical view, and (3) the ecologically grounded framework. (Ubimus is shorthand for ubiquitous music, a research field proposed in [Keller et al. 2011a].)
1.1. Information infrastructure, technology and creative practices: proposals for ubimus
The computationally oriented perspective on ubimus research has contributed to the
material resources and the creativity support components of the above definition
[Pimenta et al. 2012]. This line of investigation attempts to expand what is currently
known about musical interaction, focusing on human aspects of Information Technology
Creative Practices [Mitchell et al. 2003]. Whether involving computing devices or not,
musical interaction is defined as interaction that produces creative sonic products
through a variety of musical activities. Seen from this light, ubiquitous music comprises
sound oriented activities supported by ubiquitous computing (or ubicomp) concepts and
technology [Weiser 1991]. Material resources and tools are the various kinds of
stationary and portable computing devices integrated into ubimus ecosystems [Flores et
al. 2010; Lazzarini et al. 2012]. Distributed systems of human agents and material
resources generally involve interactive computing processes and synchronous or
asynchronous exchanges of data. Complementarily, musical interfaces comprise the
material and the virtual resources that support musical experiences in real-world
contexts. Therefore, experimental work from the computationally oriented perspective
strives to capture human-computer interactions that occur during actual music making,
independently of the type of interfaces employed, the locations of the participants and
the temporal distribution of the interactions.
Given the multiplicity of factors involved in music making, it comes as no surprise that
ubiquitous music systems place high demands on the design of the support
infrastructure. These requirements are hard to satisfy if the relationships among the
components of the systems are not taken into account. Depending on the context,
devices may provide sensor or actuator capabilities encompassing both stationary and
mobile components. Synchronous activities place high pressure on the computational
resources, especially when synchronous rendering of audio is involved [Lazzarini et al.
2012]. In the context of mobile, external group activities, both reliable connectivity and
the ability to handle fairly large amounts of data may be necessary. When engaged in
musical activities with portable devices, participants may need access to the state of the
system regardless of the location where the action takes place [Pinheiro da Silva et al.
2013b; Keller et al. 2013]. Distributed asynchronous activities require consistent data
representations for simultaneous or intermittent access by multiple users [Miletto et al.
2011; Scheeren et al. 2013; Testa et al. 2013]. While in this scenario time
synchronization support may be forfeited, ensuring persistent data mechanisms across
all the network components is a minimal requirement. The multiplicity of use scenarios
and contexts proposed for ubimus activities [Miletto et al. 2011; Keller et al. 2011a;
Pimenta et al. 2012] relegates the case of the collocated, synchronous performance of
digital musical instruments to the exception rather than the ideal model on which to base
all design decisions. The results of seven years of ubimus research indicate that a
ubiquitous music ecosystem can hardly be considered a musical instrument or a passive
object to be played by a musician. A more appropriate metaphor encompasses agents in
a dynamical system adapting to the local environment and to remotely accessible
resources while carrying out musical activities [Keller et al. 2011a; Lazzarini et al.
2012].
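As a deliberately simplified illustration of the persistence requirement for asynchronous activities discussed above, the Python sketch below keeps a shared musical data structure as an append-only log that is written to storage after every contribution, so that intermittent, asynchronous access never loses state. It is not an existing ubimus tool; all names are assumptions.

    # Illustrative sketch: a persistent, append-only record of asynchronous
    # contributions to a shared musical data structure.
    import json
    import time

    class SharedPrototype:
        def __init__(self, path):
            self.path = path   # persistent storage location
            self.events = []   # append-only log of contributions

        def contribute(self, author, action, payload):
            self.events.append({"t": time.time(), "author": author,
                                "action": action, "payload": payload})
            self.save()        # persist after every change

        def save(self):
            with open(self.path, "w") as f:
                json.dump(self.events, f)

        def load(self):
            with open(self.path) as f:
                self.events = json.load(f)

    proto = SharedPrototype("prototype.json")
    proto.contribute("ana", "add_sample", {"file": "rain.wav", "at": 0.0})
    proto.contribute("joao", "comment", {"text": "try a longer fade-in"})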
1.2. The dialogical approach to ubiquitous musical phenomena: the local context
The focus on human agents and the centrality of sustainability issues suggested by the
proposed definition of ubiquitous music research are grounded on two current
approaches to educational practices: the dialogical perspective pioneered by Paulo Freire
(1999) and the free circulation of know-how and material resources proposed by the
open educational resources initiative [Lima et al. 2012, 2014]. This research agenda is
based on a participatory, community-based, subject-centered view of education [Lima
and Beyer 2010], targeting both formal and informal educational settings.
Paulo Freire’s (1999) educational philosophy pushes the teacher’s role beyond that of a mere conduit for technical-theoretical information and encourages active protagonism by the
stakeholders of educational activities. Freire's dialogical conception sharply contrasts
with views that see creativity as a purely mental, individual process. Through hands-on
activity and social interaction among peers, students are stimulated to evaluate their
work. Given the relevance of the local referents, participants are encouraged to reflect
about their own processes and products during musical activities. While keeping tabs on
the local reality, they develop a critical view on their products and creative processes.
Through iterative cycles of exchanges, dialogical methods foster individual and
collective reflections.
Converging trends in creative practice research, educational research and music
education point to the local context as a key factor in shaping creativity in educational
settings [Burnard 2007; Keller 2000; Loi and Dillon 2006; Keller et al. 2010]. Loi and
Dillon (2006) propose that adaptive educational environments can be designed as
creative spaces that foster interaction through situational and social dynamics.
Technology becomes a key resource in this type of educational environment. Burnard
(2007) applies this framework within the music domain by placing creativity and
technology as the two central forces enabling innovative educational practices. She cites
the use of online and collaborative technology as enablers for creativity in educational
settings, proposing practice, participation and collaborative networking as objectives of
music education research.
These situated, socially informed approaches stand in stark contrast to the standard
educational views on musical creativity. While standard models were concerned with
activities that (in theory) could be carried out without the need for social interaction or
place-specific experience, such as ‘problem-solving’ and ‘thinking’ [Webster 2003],
situated approaches bring socially acquired musical experience to the forefront of the
research agenda. Thus, they highlight two aspects that need to be considered in
creativity-centered design: the place factor and the mutual processes of adaptation that
emerge through social interactions. Both aspects can be handled by methods proposed in
the context of ecologically grounded creative practices.
1.3. Ecologically grounded creative practices
Western art practices have usually focused on what to do with musical materials, rather
than what to do to empower people as creative musicians. Arguably, music creation can
only be carried out by well-prepared, creative individuals who are versed in the secrets
of Euterpe [Euterpe – Εὺτέρπη – 'well delighting' from Indo-European 'ei', 'to go' and
'terp-', 'to satisfy oneself']. Special stress has been placed on the concept of the
individual activity done for self-fulfilling purposes. The view on musical creativity as an
individual activity has also been adopted by technologically based musical practice. As a
result, the constraints formerly imposed by acoustic instrumental writing – such as
working indoors – and the exclusion of the audience as active participant in the creative
process were inherited by mainstream computer music practices (see Wishart 2009 for
an example of this perspective). Sonic art remains an activity carried out in the isolation
of the studio. This gap between the organizational systems applied on the musical
material and the context where the material resources are gathered enforces creative
techniques based on the objectification of sound. The studio as a compositional
environment follows the model of the physics or the biology lab. Sounds are isolated
and dissected according to well-established protocols, giving the composer total control
over his creative product. The studio-centered working methods enforce the idea that the
creative process consists of abstract relationships among sound objects masterfully
executed by a well-trained musician.
In the late 1990s, the application of embedded-embodied theories on cognition [Gibson
1979] laid out a path to an alternative view of musical creativity. Windsor (1995) and
Keller (1999a; 2000) provided the initial coverage of the embedded-embodied approach
to music making and music perception literature. Through an acute and highly critical
essay, Windsor (1995) brought several ecological concepts into the realm of musical
analysis. His proposal – although tuned to the demands of studio-centered
electroacoustic practice – highlighted the close affinity between sonic art practices and
ecologically oriented theoretical efforts. He attempted to establish a bridge between the
concept of affordance and the triadic representational model proposed by Peirce (1991),
arguing for a sign-oriented reinterpretation of affordances. Working independently from
a complementary perspective, Keller and Truax (1998) proposed a Gibsonean approach
to music making. Ecologically grounded synthesis techniques were presented as a proof
of concept of the applicability of the embedded-embodied view on cognition within the
context of creative music making. Two ecologically grounded works featured examples
of natural synthetic textures and everyday sonic events: “... soretes de punta.” (Keller
1998; see [Basanta 2010] for a thorough analysis of this piece) and touch'n'go [Keller
2000].
After Windsor's and Keller's initial proposals, several artists embraced embedded-embodied cognition as a conceptual and methodological basis for their creative practice.
Matthew Burtner (2005; 2011) realized a number of compositional experiences
involving field recordings and interactive techniques. As a reference to early perceptual
research [Vanderveer 1979], he labelled his work 'ecoacoustics.' Agostino Di Scipio
(2002) expanded the palette of synthesis techniques by applying iterated functions to
produce natural textures. His compositional work Audible Ecosystemics [Di Scipio
2008] featured the use of space as a key parameter for real-time creative practices.
Natasha Barrett (2000) and Tim Opie proposed techniques for gathering acoustic field
data produced by animals and physical agents [Opie and Brown 2006]. Barrett's
compositional work included the use and implementation of spatialization techniques
based on ambisonics. Davis (2008) and Basanta (2010) adopted ecologically oriented
approaches to increase the participatory appeal of their sonic installations. And Nance
(2007) and Lockhart introduced ecologically grounded practices into the realm of
instrumental composition [Lockhart and Keller 2006].
A common denominator of ecologically grounded creative practices is the close
integration of sound processes shaped after natural phenomena with perceptual and/or
social factors wrought by everyday experience. The ecocompositional paradigm that has
emerged from the multiple creative projects realized since 1997 encompasses two
strategies: (1) the construction of a theoretical framework for creative practices
supported by embedded-embodied cognitive mechanisms [Keller 2000; Keller and
Capasso 2006; Keller 2012]; and (2) the concurrent development of design techniques
coherent with this theoretical scaffolding, featuring participation and emergence as the
two central creative driving forces [Keller et al. 2011a]. Soundscape composition
brought real-world context into the musical work. Ecocomposition sought to place
music creativity into real-world contexts. During the last decade, two strategies were
developed for this purpose. On the one hand, music making involved reenacting
experiences in their original geographical milieu [Keller 2004]. On the other, musical
works were co-composed with the public [Keller 2000; Keller et al. 2011a]. Thus,
ecocomposition took the act of creation out of the realm of the studio. Techniques such
as accumulation and enactive social interaction helped to lower the usability
requirements of musical systems, bringing the audience into the creative act [Keller et
al. 2002].
2. Social factors: communities of practice
One of the objectives of ubiquitous music research is to gather insights on the
relationships between the subjects’ profiles and the strategies they use to handle the
creative tasks. Subjects may choose to approach the creative activity by applying
previously learned strategies. Sometimes, this background knowledge may not be
applicable to technologically enhanced environments. So ubiquitous music experiments
have adopted a parsimonious method for increasing tool access without hindering reuse
of previous knowledge [Lima et al. 2012; Keller et al. 2013]. Tools are presented as
opportunities for interaction, but they are not given as requirements until a series of
preliminary planning studies has been completed. Depending on their specific profile
and their previous experience, some subjects take advantage of computationally based
support while others limit their actions to simple forms of sonic manipulation. Again,
this aspect of the procedural dimension is treated as a variable to be observed instead of
being a predetermined condition.
Community-based methods are at the center of ubiquitous music practice [Pimenta et al.
2012]. The free access to know-how and the fast circulation of resources within social
groups with common objectives foster the emergence of a phenomenon quite relevant to
ubiquitous music research: the communities of practice [Wenger 2010]. A community of
practice is a social system that arises out of learning and exchange processes. This type
of community unfolds through practice, not prescription [Wenger 2010:192], so it can
be seen as an extension of the dialogical perspective [Freire 1999; Lima et al. 2012].
Take as an example open-source communities. Communities that are nimble and
flexible – consisting of volunteer developers who make contributions either individually
or as part of temporary teams with shared governance – foster imagination, engagement
and consensus [Pimenta et al. 2012]. The network music experiments of Brown and Dillon (2007) and Bryan-Kinns (2004) suggest that these characteristics afford increased levels of participation in musical activities. Therefore, communities of practice should constitute
a fertile context for creativity-centered design.
Summing up how social factors have impacted ubimus research, this section has focused
on the use of social resources at several levels: (a) ubimus planning studies have
provided insights on the relationships between the subjects’ profiles and the strategies
they use to handle the creative tasks [Lima et al. 2012]; (b) community exchanges of
material and social resources have been used to support learning activities; (c)
communities of practice were employed as the social grounding for creativity-centered
design activities; (d) social interactions were used as tools for design assessment and
critical evaluation; (e) socially shared resources have served as a factor for growth and
consolidation of a community of practice engaged in ubiquitous music research
[Pimenta et al. 2012]. By fostering social exchanges among music practitioners, the
activity of prototyping creative products has been incorporated into creativity-centered
design. Design activities – involving negotiation among artistic, computational and
educational perspectives – have helped to adjust the objectives and methods of the
ubimus research agenda. And at the longest time span, the formation of a ubiquitous
music community of practice – encompassing both novice practitioners and experienced
designers – has encouraged the circulation of material and social resources feeding the
community’s sustainable growth.
3. Material factors: volatility and rivalry
Keller and coauthors (2011b) define the material dimension as the collection of
resources available to the participants of a creative activity. In the case of ubiquitous
music systems, the material dimension encompasses the sound sources and the tools
used to generate creative musical products and the material results of the musical
activity. Music creativity models that emphasize the material dimension provide the
most direct window to experimental observation. Two of the three interrelated stages
suggested by Dingwall (2008) – the generation stage and the development stage – can
easily be assessed by measuring the quantity of the material produced. The stage of putting the pieces together may involve selection, grouping and disposal of material resources;
therefore both objective and subjective assessments may be necessary. Objective
assessment demands measurements of the resource yield and the resource consumption
as a function of time [Ferraz and Keller 2012; Keller et al. 2011c]. Bennett’s (1976)
model suggests that musical creative processes start from a single germinal idea. Collins
(2005) also adopts this view but allows for several musical ideas (he calls them themes
or motifs) at the initial stage. Contrastingly, Hickey (2003), Burnard and Younker
(2004), Chen (2006) and Dingwall (2008) models suggest that exploratory activities
precede the selection of materials. The methodological difficulty resides in the task
choice for creativity assessment experiments. The underlying hypothesis is – as
suggested by Hickey, Burnard and Younker, Chen and Dingwall models – that both
restricting and providing access to materials are part of the compositional process.
Therefore, by selecting materials or tools the experimenter is taking the place of the
composer and the resulting data cannot be used to determine whether the creative
musical activity begins by exploratory actions or by a well-defined procedural plan with
an explicit material objective. When the musical materials are given by the
experimenter, it is not possible to draw conclusions regarding how the material
resources are collected. This methodological problem is called early domain restriction
[Keller et al. 2011b].
Focusing on creative music making as an activity [Barreiro and Keller 2010] has several
implications on the study of material resources. Ubiquitous music phenomena involve
both the locally available objects and the remote materials accessible through
technological infrastructure. Therefore, we need to consider at least two types of
resources: 1. the resources present on site, defined in the creativity literature as the place
factor (i.e., collocated resources), and 2. the materials accessed through creativity
support tools [Shneiderman 2007] which may or may not be collocated (i.e., distributed
resources). Iannis Xenakis (1971/1992) suggested that creative musical activities may
occur in-time or out-of-time. This idea has been adopted by the human-computer
interaction literature under the labels of synchronous and asynchronous activities
[Miletto et al. 2011]. Applying this notion to material resources introduces a new target
for experimental work. Some materials may only become available during the creative
activity and cannot be recycled for future use. Other resources may be repeatedly used in
the context of asynchronous creative work. An example of the former case is provided by improvisatory performances based on network infrastructure. Each participant's action
depends on the sonic cues provided synchronously by the other participants. These sonic
cues are only available in-time, therefore they can be classified as volatile material
resources. Other resources can be incorporated in the context of iterative cycles of
creative activity. A good example is provided by the concept of musical prototype
[Miletto et al. 2011]. A musical prototype is a data structure that supports actions by
multiple users through a network infrastructure. A single creative product is shared by
the participants collaborating throughout the creative cycle. Participants access the
musical prototype remotely and cooperate by doing direct modifications and by
providing comments on their actions and on their partners' actions. Creative decisions
are the result of a cumulative process of material exchanges that can last from a few
hours to several months. Hence, we can say that a musical prototype is a non-volatile
material resource.
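One way to turn these design qualities into observables for experimental studies is to record them as explicit attributes of each material resource. The Python sketch below is a minimal illustration: the class is an assumption introduced here, and only the classifications stated in this paper are filled in (rivalry is left undefined where the text does not discuss it).

    # Illustrative sketch: volatility and rivalry as recorded attributes of resources.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class MaterialResource:
        name: str
        volatile: bool                 # True: available only in-time, cannot be recycled
        rival: Optional[bool] = None   # True: loses creative value when copies are shared

    live_cue = MaterialResource("synchronous sonic cue (network improvisation)", volatile=True)
    prototype = MaterialResource("musical prototype", volatile=False)
    sample = MaterialResource("recorded sound sample", volatile=False, rival=True)
    stochastic = MaterialResource("stochastic synthesis model", volatile=False, rival=False)

    reusable = [r.name for r in (live_cue, prototype, sample, stochastic) if not r.volatile]
    print(reusable)  # the non-volatile resources available for asynchronous reuse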
Recent theoretical proposals on creativity generally label the results of creative activity
as 'products' [Kozbelt et al. 2010]. If we take into account the ongoing mutual
adaptations among agents and objects during creative activities [Keller and Capasso
2006], a functionally oriented description of the material resources becomes necessary.
Material results of creative activity may be either resources or products depending on
their role within the context of the activity. For example, the sounds collected in San
Francisco's BART transportation system (metro or subway) served as material resources for the creative product Metrophonie [Keller 2005]. The same collection of sounds was expanded through ecological modeling techniques [Keller and Berger 2001; Keller and Truax 1998] to be employed as material resources within the multimedia installation The Urban Corridor [Capasso et al. 2001]. In The Urban Corridor, the actions of the participants shape the organization of the sonic matter [Keller 2012; Keller et al. 2002].
Every instance of the piece produces a personalized creative product that is different
each time the installation is visited. In this case, instead of being delivered as a single
creative product, the sound sources of The Urban Corridor are available as material
resources for the creative actions exerted by the audience. Hence, while the sound
sources and creative products can be clearly separated in Metrophonie, this separation is
not possible in The Urban Corridor. In the latter, sound sources remain as material
resources and the creative product is equated to the emergent qualities of the interaction
among multiple agents within the ubiquitous music ecosystem.
A group of perspectives that has direct application in ubiquitous music research
comprises the psycho-economic theories of general creativity [Rubenson and Runco
1992, 1995; Sternberg and Lubart 1991]. The underlying assumption of this group of
theories is that creative activity both demands and produces resources. Economically
oriented approaches provide opportunities for observation and quantification of
variables that are hard to assess within other creativity paradigms (for a comparison
among creative theories see Kozbelt et al. 2010). Given that available resources for
creative activity are finite, they may be quantified. By observing the flux of
consumption and production of resources, quantitative predictions may be linked to
specific environmental conditions. The effectiveness of the creative strategy can be
assessed by comparing the use of resources with the creative yield. The type of creative
outcomes could be predicted by identifying what resources are available and how they
are used throughout the creative cycle. And the relationship between resource
consumption and creative waste can be used to assess the sustainability of the creative
ecosystem under observation. Consequently, creative potentials and creative
performance become linked to specific variables that can be studied through empirical
work. Observable resources become the focus of the experiments, opening a window to
quantitative comparisons among different strategies for support of creative activities.
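A small worked example may make these quantities concrete; the figures below are invented solely for illustration.

    # Illustrative sketch: simple resource-flow ratios from observed counts.
    def resource_flow_metrics(resources_consumed, products, by_products):
        yield_ratio = products / resources_consumed          # product per unit of resource
        waste_ratio = by_products / resources_consumed       # waste per unit of resource
        effectiveness = products / (products + by_products)  # share of output that is product
        return yield_ratio, waste_ratio, effectiveness

    # e.g. a session that consumed 40 sound resources, kept 8 in the final mix
    # and discarded 24 edited fragments:
    print(resource_flow_metrics(resources_consumed=40, products=8, by_products=24))
    # -> (0.2, 0.6, 0.25)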
From an economy-oriented perspective, material resources may be rival or non-rival.
Rival resources lose value when shared. Non-rival resources can be widely distributed
without losing value. Information is a good example of a non-rival resource.
Information can be freely shared without any impact on its social value. Contrastingly, if
a food stock is partitioned within a community its value is reduced proportionally to its
depletion rate. An empty food stock has no social value.
There are some interesting observations to be gathered through the application of the
quality of rivalry in creativity-centered design. Resources for creative activities can be
characterized by their level of relevance and originality [Weisberg 1983]. In the context
of group activities, these two factors constitute opposite forces [Ferraz and Keller 2012].
Creative resources that are unique and have not been shared among group members
keep their creative potential and have a high level of originality. Through sharing,
original resources lose their creative potential while they gain acceptance among group
members. The most relevant resources are the ones most widely distributed with the
highest social acceptance. Therefore, since creative rival (c-rival) resources lose value
through social acceptance, they can negatively impact originality. On the other hand,
creative non-rival (c-non-rival) resources can be freely distributed without affecting
originality. Given that c-non-rival resources can be widely shared, they can attain higher
levels of relevance than the c-rival resources.
Sound samples can be classified as creative rival resources. The novelty of the creative products that use samples decreases in proportion to the number of copies of the original sound. Deterministic synthesis models generate the same sound for the same set of parameters, so they can also be classified as c-rival resources. Given that physical objects produce different sonic results each time they are excited, the events they produce can be classified as c-non-rival resources. In a similar vein, a stochastic synthesis algorithm can render multiple events without producing repeated instances [Keller and Truax 1998]. Timbre-based musical practices – such as the use of distorted guitar sounds – are also examples of c-non-rival resources (an example provided by an anonymous reviewer). An example of a creative
application of resource degradation is provided by [Fenerich et al. 2013]. The authors
used an iterative network transmission process to emulate the sonic feedback
mechanism proposed by Alvin Lucier (1969) in his piece I am sitting in a room...
[Lucier and Simon 2012]. In Fenerich's and coauthors' piece the disruptive noises of the
network transmission furnish new material as each copy of the sound is sent through the
network. The sonic output is the result of multiple degraded copies of the original
sound.
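The contrast between c-rival and c-non-rival sonic resources can also be illustrated with a toy pair of event generators (these are not the granular techniques cited above): a deterministic generator returns identical material for identical parameters, whereas a stochastic one never repeats an instance.

    # Illustrative sketch: deterministic vs. stochastic event generation.
    import random

    def deterministic_event(freq, dur, n=8):
        # Same parameters always yield the same onset/frequency pattern (c-rival).
        return [(i * dur / n, freq) for i in range(n)]

    def stochastic_event(freq, dur, n=8, jitter=0.05, rng=random):
        # Onsets and frequencies are perturbed, so every call is a new instance (c-non-rival).
        return [(i * dur / n + rng.uniform(-jitter, jitter),
                 freq * rng.uniform(0.97, 1.03)) for i in range(n)]

    print(deterministic_event(440.0, 1.0) == deterministic_event(440.0, 1.0))  # True
    print(stochastic_event(440.0, 1.0) == stochastic_event(440.0, 1.0))        # almost surely False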
4. Summary and implications for creativity-centered design
Taking as a point of departure the current definition of ubiquitous music – “a research
field that deals with distributed systems of human agents and material resources that
afford musical activities through sustainable creativity support tools” – I proposed the
use of two design qualities in creativity-centered experimental work: volatility and
rivalry. Ubiquitous music experiments need to assess their resource usage through
observations of creative products and material resources. While some creative
techniques provide a high product yield, other methods tend to produce high levels of
creative waste. Therefore, creative waste assessments may furnish a window to the
resource flow mechanics of ubiquitous music ecosystems. From a resource-flow
perspective, the volatility of the material resources employed is a design quality that can
be applied to gauge the level of support for asynchronous activities. Persistent resources,
such as network-shared musical data allied to consistent metaphors for interaction, may
prove useful to support creative activities across multiple devices, involving access by
multiple stakeholders. Ubimus research carried out during the last seven years suggests
that the resources' volatility should be taken into account when designing ubimus
ecosystems. Creative rival resources do not add value to the creative product when
shared. Therefore, distribution of copies of creative rival resources among group
members should be reduced to a minimum. This limitation does not apply to the case of
creative non-rival resources (e.g., synthesis techniques that generate new material for
each iteration [Keller and Truax 1998]). These resources can be shared without
imposing a steep reduction on the originality of the stakeholders' creative products.
5. References
Bennett, S. (1976). The process of musical creation: Interview with eight composers. Journal of
Research in Music Education 24, 3-13.
Brown, A. R. & Dillon, S. C. (2007). Networked improvisational musical environments:
learning through online collaborative music making. In J. Finney & P. Burnard (eds.),
Teaching Music in the Digital Age (pp. 96-106). Continuum International Publishing Group.
Barreiro, D. L. & Keller, D. (2010). Composing with sonic models: fundamentals and
electroacoustic applications (Composição com modelos sonoros: fundamentos e aplicações
eletroacústicas). In D. Keller & R. Budasz (eds.), Criação Musical e Tecnologias: Teoria e
Prática Interdisciplinar. Goiânia, GO: Editora ANPPOM.
Barrett, N. (2000). A compositional methodology based on data extracted from natural
phenomena. In Proceedings of the International Computer Music Conference (ICMC 2000).
Ann Arbor, MI: MPublishing, University of Michigan Library.
Basanta, A. (2010). Syntax as Sign: The use of ecological models within a semiotic approach to electroacoustic composition. Organised Sound 15, 125-132. (Doi: 10.1017/S1355771810000117.)
Bryan-Kinns, N. (2004). Daisyphone: the design and impact of a novel environment for remote
group music improvisation. In Proceedings of the 5th Conference on Designing Interactive
Systems: Processes, Practices, Methods and Techniques (pp. 135-144). New York, NY:
ACM. (ISBN: 1-58113-787-7.)
Burnard, P. & Younker, B. A. (2004). Problem-solving and creativity: insights from students’
individual composing pathways. International Journal of Music Education 22, 59-76.
Burnard, P. (2007). Reframing creativity and technology: promoting pedagogic change in music
education. Journal of Music Technology and Education 1(1), 37-55. (Doi:
10.1386/jmte.1.1.37/1.)
Burtner, M. (2005). Ecoacoustic and shamanic technologies for multimedia composition and
performance. Organised Sound 10 (1), 3-19. (Doi: 10.1017/S1355771805000622.)
Burtner, M. (2011). EcoSono: Adventures in interactive ecoacoustics in the world. Organised
Sound 16 (3), 234-244. (Doi: 10.1017/S1355771811000240.)
Chen, C. W. (2006). The creative process of computer-assisted composition and multimedia
composition: Visual images and music. Doctor of Philosophy Thesis. Melbourne: Royal
Melbourne Institute of Technology.
Collins, D. (2005). A synthesis process model of creative thinking in music composition.
Psychology of Music 33 (2), 193-216. (Doi: 10.1177/0305735605050651.)
Davis, T. (2008). Cross-Pollination: Towards an aesthetics of the real. In Proceedings of the
International Computer Music Conference (ICMC 2008). Ann Arbor, MI: MPublishing,
University of Michigan Library.
Dingwall, C. (2008). Rational and intuitive approaches to music composition: The impact of
individual differences in thinking/learning styles on compositional processes. Bachelor of
Music Dissertation. Sydney: University of Sydney.
Di Scipio, A. (2008). Émergence du son, son d'emergence: Essai d'épistémologie expérimentale
par un compositeur. Intellectica 48-49, 221-249.
Di Scipio, A. (2002). The Synthesis of Environmental Sound Textures by Iterated Nonlinear
Functions, and its Ecological Relevance to Perceptual Modeling. Journal of New Music
Research 31 (2), 109-117. (Doi: 10.1076/jnmr.31.2.109.8090.)
Eaglestone, B., Ford, N., Brown, G. J. & Moore, A. (2007). Information systems and creativity: an empirical study. Journal of Documentation 63 (4), 443-464. (Doi: 10.1108/00220410710758968.)
Fenerich, A., Obici, G. & Schiavoni, F. (2013). Marulho TransOceânico: Performance musical
entre dois continentes. In E. Ferneda, G. Cabral & D. Keller (eds.), Proceedings of the XIV
Brazilian Symposium on Computer Music (SBCM 2013). Brasília, DF: SBC.
Ferraz, S. & Keller, D. (2012). Preliminary proposal of the MDF model of collective creation
(MDF: Proposta preliminar do modelo dentro-fora de criação coletiva). In Proceedings of
the III Ubiquitous Music Workshop (III UbiMus). São Paulo, SP: Ubiquitous Music Group.
Flores, L. V., Pimenta, M. S., Miranda, E. R., Radanovitsck, E. A. A. & Keller, D. (2010).
Patterns for the design of musical interaction with everyday mobile devices. In Proceedings
of the IX Symposium on Human Factors in Computing Systems (pp. 121-128). Belo
Horizonte, MG: SBC.
Freire, P. (1999). Pedagogy of Hope (Pedagogia da Esperança: Um Reencontro com a
Pedagogia do Oprimido). Rio de Janeiro, RJ: Paz e Terra.
Gibson, J. J. (1979). The ecological approach to visual perception. Boston: Houghton Mifflin.
(ISBN: 0898599598.)
Hickey, M. M. (2003). Creative thinking in the context of music composition. In M. M. Hickey
(ed.), Why and How to Teach Music Composition: A New Horizon for Music Education (pp.
31-53). Reston, VA: MENC, The National Association for Music Education.
Hutchins, E. (2010). Cognitive ecology. Topics in Cognitive Science 2 (4), 705-715. (Doi:
10.1111/j.1756-8765.2010.01089.x.)
Keller, D. (1998). "... soretes de punta." In Harague II [Compact Disc]. New Westminster, BC: earsay productions.
Keller, D. (2000). Compositional processes from an ecological perspective. Leonardo Music
Journal, 55-60. (Doi: 10.1162/096112100570459.)
Keller, D. (2004). Paititi: A Multimodal Journey to El Dorado. Doctor in Musical Arts Thesis,
Stanford, CA: Stanford University.
Keller, D. (2012). Sonic ecologies. In A. R. Brown (ed.), Sound Musicianship: Understanding
the Crafts of Music (pp. 213-227). Newcastle upon Tyne, UK: Cambridge Scholars
Publishing. (ISBN: 1-4438-3912-4.)
Keller, D., Barreiro, D. L., Queiroz, M. & Pimenta, M. S. (2010). Anchoring in ubiquitous
musical activities. In Proceedings of the International Computer Music Conference (pp.
319-326). Ann Arbor, MI: MPublishing.
Keller, D. & Capasso, A. (2006). New concepts and techniques in eco-composition. Organised
Sound 11 (1), 55-62. (Doi: 10.1017/S1355771806000082.)
Keller, D., Flores, L. V., Pimenta, M. S., Capasso, A. & Tinajero, P. (2011a). Convergent trends
toward ubiquitous music. Journal of New Music Research 40 (3), 265-276. (Doi:
10.1080/09298215.2011.594514.)
Keller, D., Lima, M. H., Pimenta, M. S. & Queiroz, M. (2011b). Assessing musical creativity:
material, procedural and contextual dimensions. In Proceedings of the National Association
of Music Research and Post-Graduation Congress - ANPPOM (pp. 708-714). Uberlândia,
MG: ANPPOM.
Keller, D., Otero, N., Pimenta, M. S., Lima, M. H., Johann, M., Costalonga, L. & Lazzarini, V.
(2014). Relational properties in interaction aesthetics: The ubiquitous music turn. In
Proceedings of the Electronic Visualisation and the Arts Conference (EVA-London 2014).
London: Computer Arts Society Specialist Group.
Keller, D. & Truax, B. (1998). Ecologically based granular synthesis. In Proceedings of the
International Computer Music Conference (pp. 117-120). Ann Arbor, MI: MPublishing,
University of Michigan Library.
Kozbelt, A., Beghetto, R. A. & Runco, M. A. (2010). Theories of Creativity. In J. C. Kaufman & R. J. Sternberg (eds.), The Cambridge Handbook of Creativity. Cambridge, UK: Cambridge University Press. (ISBN: 9780521730259.)
Lazzarini, V., Yi, S., Timoney, J., Keller, D. & Pimenta, M. S. (2012). The Mobile Csound
Platform. In Proceedings of the International Computer Music Conference (pp. 163-167).
Ann Arbor, MI: MPublishing, University of Michigan Library.
Lima, M. H. & Beyer, E. (2010). An experience in musical education and new technologies in
school context with Brazilian young people: Reflections and perspectives. In Music
Education Policy and Implementation: Culture and Technology. Proceedings of the 15th
International Seminar of the Policy Commission on Culture, Education and Media, 74-79.
Kaifeng: ISME.
Lima, M. H., Keller, D., Pimenta, M. S., Lazzarini, V. & Miletto, E. M. (2012). Creativity-centred design for ubiquitous musical activities: Two case studies. Journal of Music, Technology and Education 5 (2), 195-222. (Doi: 10.1386/jmte.5.2.195_1.)
Lockhart, A. & Keller, D. (2006). Exploring cognitive process through music composition. In
Proceedings International Computer Music Conference (ICMC 2006) (pp. 9-12). Ann Arbor,
MI: MPublishing, University of Michigan Library.
Loi, D. & Dillon, P. (2006). Adaptive educational environments as creative spaces. Cambridge
Journal of Education 36 (3), 363-381. (Doi: 10.1080/03057640600865959.)
Lucier, A. & Simon, D. (2012). Chambers: Scores by Alvin Lucier. Middletown, CT: Wesleyan
University Press. http://muse.jhu.edu/books/9780819573087.
Miletto, E. M., Pimenta, M. S., Bouchet, F., Sansonnet, J.-P. & Keller, D. (2011). Principles for
music creation by novices in networked music environments. Journal of New Music
Research 40 (3), 205-216. (Doi: 10.1080/09298215.2011.603832.)
Miller, S. (2005). Audible-Mobiles: An application of eco-systemic programming in Kyma. In
N. Zahler (ed.), Proceedings of Spark: Festival of Electronic Music and Art (pp. 37-39).
Minneapolis, MN: University of Minnesota.
Mitchell, W. J., Inouye, A. S. & Blumenthal, M. S. (2003). Beyond Productivity: Information
Technology, Innovation, and Creativity. Washington, DC: The National Academies Press.
Nance, R. W. (2007). Compositional Explorations of Plastic Sound. Doctoral Thesis in Music,
De Montfort University.
Odena, O. (2012). Musical Creativity: Insights from Music Education Research. Ashgate
Publishing Company. (ISBN: 9781409406228.)
Peirce, C. S. (1991). Peirce on Signs: Writings on Semiotic[s]. J. Hoopes (ed.). Chapel Hill,
NC: University of North Carolina Press. (ISBN: 9780807843420.)
Pimenta, M. S., Miletto, E. M., Keller, D. & Flores, L. V. (2012). Technological support for
online communities focusing on music creation: Adopting collaboration, flexibility and
multiculturality from Brazilian creativity styles. In N. A. Azab (ed.), Cases on Web 2.0 in
Developing Countries: Studies on Implementation, Application and Use. Vancouver, BC:
IGI Global Press. (ISBN: 1466625155.)
Pinheiro da Silva, F., Keller, D., Ferreira da Silva, E., Lazzarini, V. & Pimenta, M. S. (2013a).
Creativity in everyday settings: The impact of anchoring (Criatividade em ambientes
cotidianos: o impacto do fator de ancoragem). In Proceedings of the IV Ubiquitous Music
Workshop (IV UbiMus). Porto Alegre, RS: Ubiquitous Music Group.
Pinheiro da Silva, F., Keller, D., Ferreira da Silva, E., Pimenta, M. S. & Lazzarini, V. (2013b).
Everyday musical creativity: exploratory study of ubiquitous musical activities (Criatividade
musical cotidiana: estudo exploratório de atividades musicais ubíquas). Música Hodie 13,
64-79.
Rubenson, D. L. & Runco, M. A. (1995). The psychoeconomic view of creative work in groups and organizations. Creativity and Innovation Management 4 (4), 232-241. (Doi: 10.1111/j.1467-8691.1995.tb00228.x.)
Rubenson, D. L. & Runco, M. A. (1992). The psychoeconomic approach to creativity. New Ideas in Psychology 10 (2), 131-147. (Doi: 10.1016/0732-118X(92)90021-Q.)
Schaeffer, P. (1977). Traité des Objets Musicaux: Essai Interdisciplines. Paris: Éditions du
Seuil. (ISBN: 9782020026086.)
Schafer, R. M. (1977). The Tuning of the World. New York, NY: Knopf.
Scheeren, F. M., Pimenta, M. S., Keller, D. & Lazzarini, V. (2013). Coupling social network
services and support for online communities in codes environment. In A. L. Koerich & G.
Tzanetakis (eds.), Proceedings of the 14th International Society for Music Information
Retrieval Conference (ISMIR 2013) (pp. 134-139). Curitiba, PR: ISMIR.
Shneiderman, B. (2007). Creativity support tools: accelerating discovery and innovation.
Communications of the ACM 50 (12), 20-32. (Doi: 10.1145/1323688.1323689.)
Sternberg, R. & Lubart, T. (1991). An Investment Theory of Creativity and Its Development.
Human Development 34 (1), 1-31.
Truax, B. (2002). Genres and techniques of soundscape composition as developed at Simon
Fraser University. Organised Sound 7 (1), 5-14. (Doi: 10.1017/S1355771802001024.)
Webster, P. (2003). Asking music students to reflect on their creative work: Encouraging the
revision process. In Yip, L. C. R., Leung, C. C. & Lau, W. T. (Eds.), Curriculum Innovation
in Music. Hong Kong: The Hong Kong Institute of Education, 16-27.
Weisberg, R. W. (1993). Creativity: Beyond the Myth of Genius. New York, NY: W. H.
Freeman. (ISBN: 9780716723677.)
Wen-Chung, C. (1966). Open rather than bounded. Perspectives of New Music 5 (1), 1-6.
Wenger, E. (2010). Communities of practice and social learning systems: the career of a
concept. In C. Blackmore (Ed.), Social Learning Systems and Communities of Practice.
London: Springer Verlag and the Open University.
Weiser, M. (1991). The Computer for the Twenty-First Century. Scientific American 265 (3), 94-101.
Windsor, W. L. (1995). A Perceptual Approach to the Description and Analysis of Acousmatic
Music. Doctoral Thesis in Music, London: City University.
Wishart, T. (2009). Computer music: Some reflections. In R. T. Dean (ed.), The Oxford
Handbook of Computer Music (pp. 151-160). New York, NY: Oxford University Press.
(ISBN: 9780195331615.)
Xenakis, I. (1971/1992). Formalized Music: Thought and Mathematics in Composition.
Hillsdale, NY: Pendragon Press. (ISBN: 9781576470794.)
Zawacki, L. & Johann, M. (2012). A prospective analysis of analog audio recording with web
servers. In Proceedings of the III Ubiquitous Music Workshop (III UbiMus).
http://compmus.ime.usp.br/ubimus/pt-br/node/23.
Prototyping of Ubiquitous Music Ecosystems
Victor Lazzarini1, Damián Keller2, Carlos Kuhn3, Marcelo Pimenta3, Joseph Timoney1
1 Sound and Music Research Group, National University of Ireland, Maynooth, Co. Kildare, Ireland
2 Amazon Center for Music Research - NAP, Universidade Federal do Acre - Federal University of Acre
3 Computer Science Department, Universidade Federal do Rio Grande do Sul
{victor.lazzarini,joseph.timoney}@nuim.ie, [email protected], {mpimenta,ckuhn}@inf.ufrgs.br
Abstract. This paper focuses on the prototyping stage of the design cycle of ubiquitous music (ubimus) ecosystems. We present three case studies of prototype deployments for creative musical activities. The first case exemplifies a ubimus system for synchronous musical interaction using a hybrid Java-JavaScript development platform, mow3s-ecolab. The second case study makes use of the HTML5 Web Audio library to implement a loop-based sequencer. The third prototype - an HTML-controlled sine-wave oscillator - provides an example of using the Portable Native Client (PNaCl) platform, Chromium's open-source sandboxing technology, for audio programming on the web. This new approach involved porting the Csound language and audio engine to the PNaCl web technology. The Csound PNaCl environment provides programming tools for ubiquitous audio applications that go beyond the HTML5 Web Audio framework. The limitations and advantages of the three approaches proposed - the hybrid Java/JavaScript environment, the HTML5 audio library and the Csound PNaCl infrastructure - are discussed in the context of rapid prototyping of ubimus ecosystems.
1. Introduction
Creativity-centered design of ubiquitous musical systems involves at least four developmental stages: defining strategies, planning, prototyping and assessment. This paper
focuses on the third stage of the design cycle, prototyping. The first section reviews related work in the field and the second places prototyping within the context of ubimus design. Then we present a case study focusing on the deployment of a ubimus system for synchronous musical interaction using a hybrid Java-JavaScript development platform based on browser technology. The second case involves the use of Web Audio in HTML5 to implement a loop-based sequencer. The third case features a simple example of an HTML-controlled sine-wave oscillator using the Csound PNaCl programming environment. The final section provides a summary of the observations gathered
during the design of these three prototypes and discusses the limitations and advantages
of each approach.
2. Related work
In recent years, there has been some research (and commercial) work aiming to provide support for the development of audio applications for mobile platforms, such as MobileSTK [Essl and Rohs 2006], based on STK and released in 2006 with support for Symbian and Windows CE devices. This platform was also ported to iOS in 2010 [Bryan et al. 2010] and incorporated in a toolkit called MoMu. Also from Essl [Essl 2010], we have Urmus, a Lua framework that is a multi-layered environment intended to support interface design, interaction design, interactive music performance and live patching on multi-touch mobile devices. Control [Roberts 2011] is an application that allows users to define custom graphic interfaces for MIDI and OSC. The interfaces are defined using web standards like HTML, CSS and Javascript. Roberts is also one of the creators of Gibber
[Roberts et al. 2013], a language for live-coding in the browser. Gibber also has a 2D
drawing API and event handlers for touch, mouse, and keyboard events, enabling fast
prototyping. Since Gibber is centralized on a server, users can create collaborative programming sessions and publish compositions and instruments.
3. Designing ubiquitous music systems
Defining design strategies for ubiquitous music encompasses two areas of expertise: interaction and signal processing. The Ubiquitous Music Group (g-ubimus) has been investigating the musical applications of methods based on human-computer interaction and
ubiquitous computing techniques. Metaphors for interaction provide abstractions that encapsulate solutions applicable to a variety of activities without making unnecessary technical assumptions [Pimenta et al. 2012]. Thus, interaction metaphors materialize general
ergonomic principles to fulfil the human and the technological demands of the activity
[Keller et al. 2010] [Pimenta et al. 2012]. In a similar vein, recurring technological solutions can be grouped as interaction patterns [Flores et al. 2010]. These patterns are
particularly useful when developers face the task of finding suitable strategies to deal
with specific interface implementation issues. So far, our group’s research has unveiled
four musical interaction patterns: natural interaction, event sequencing, process control
and mixing [Flores et al. 2012]. Each of these patterns tackles a specific interaction problem. Natural interaction deals with forms of musical interaction that are closely related to
handling everyday objects. Event sequencing lets the user manipulate temporal information by freeing the musical events from their original time-line. Process control provides
high-level abstractions of multiple parametric configurations, letting the user control complex processes by using simple actions. Mixing can be seen as the counterpart of event
sequencing for synchronous interaction. Musical data - including control sequences and
sound samples - is organized by user actions that occur in-time. Furthermore, technologically based musical environments also demand tailored support for sound rendering. Signal processing techniques for creative musical activities have to be developed
according to the characteristics of the tasks involved in the creative cycle, the computational resources provided by the support infrastructure and the profile of the target users.
Ubiquitous musical activities may involve mobility, connectivity and coordination among
heterogeneous devices with scarce computational resources. Thus, carefully chosen software design strategies are a prerequisite to tackle signal processing support in ubiquitous
contexts [Lazzarini et al. 2012] [Lazzarini et al. 2014].
Ubiquitous-music planning studies involve early assessment of target population
expectations and identification of opportunities for creativity support. Through a ubimus
planning study, Lima and coauthors (2012)[Lima et al. 2012] found sharply differing expectations on technological usage by musicians and musically naive subjects in educational contexts. Based on these results, they proposed a simple rule of thumb: users
like what comes closer to reenacting their previous musical experiences. Non-technical
approaches, such as those proposed by traditional soundscape activities [Schafer 1977],
may not be suited for introducing non-musicians to sonic composition. Naive subjects
may respond better to technologically oriented approaches, such as those found in ecologically grounded creative practices [Keller et al. 2014]. If the rule of thumb previously stated holds true, musically naive participants welcome ease of use and naturalness, while
musicians tend to prefer interfaces that foster behaviors based on acoustic-instrumental
metaphors and common-practice music notation. Therefore, design of creatively oriented
technologies needs to fulfil different demands depending on the intended user base.
Technological support for pervasive musical activities increases the difficulty of
the design task on two fronts. Ubimus systems may enhance the users’ creative potential
by providing access to previously unavailable material and social resources. But a more
intensive usage of resources can introduce unintended complexities, narrowing the access
to a small user base. Thus, one challenge faced by ubimus designers is to provide intuitive
tools for complex creative tasks. Furthermore, custom-made, special purpose hardware
interfaces - such as those proposed by tangible user interface design approaches - may
fulfil the requirements of transparency and naturalness, reducing the cognitive load of complex tasks. But they do not guarantee wide accessibility. In this case, the catch lies in
the financial toll. Special purpose systems are difficult to distribute and maintain. As a
consequence, the user base is narrowed by the increased costs of the hardware.
Previous research indicated that another important difficulty faced by the designers of ubiquitous music tools is the slowness of the validation cycle [Keller et al. 2011a].
Because complete integrated systems are hard to design and test, tools usually deal with
isolated aspects of musical activity. Musicians' usage of the tools may not correspond to
the intended design and integration of multiple elements may give rise to unforeseen problems. As a partial solution to these hurdles, the Ubiquitous Music Group has suggested the
inclusion of music making within the development cycle. This integration of music making and software development is based on a broad approach to usability [Hornbaek 2006].
Fine-grained technical decisions are made after the usability requirements of the system
have been well established through actual usage. So rapid deployment is prioritized over
testing on a wide user base. Given the lack of standard support for audio and musical
data formats, initial development of audio applications for mobile platforms was feasible but complex and unintuitive [Keller et al. 2010] [Radanovitsck et al. 2011]. Recent
advances have paved the way to wider distribution of tools within the computer music
community [Brinkmann 2012] [Lazzarini et al. 2012] [Lazzarini et al. 2014]. Within an
iterative approach to design - involving creative musical activities and usability assessments - we have developed rapid prototyping techniques tailored for ubiquitous music
contexts. Since our research targets both interaction and signal processing, flaws that
arise from the coordination between these two processes can be identified early within the design cycle. Furthermore, full-blown creative musical activities uncover opportunities for
creative exploration of both the software and the local resources [Keller et al. 2013]. The
prototypes reported in the second part of this paper provide examples of the advantages
and limitations of an experimentally grounded approach to the development of ubiquitous
music ecosystems.
The last stage of the ubimus design cycle involves the assessment of creative processes and products, targeting the expansion of the creative potentials and the sustainable
usage of resources. Although creativity assessment is an active area of research within psychology [Amabile 1996], assessment of creative outcomes is still a taboo topic among
music practitioners. From a product-centered perspective [Boulez 1986], creativity assessment would be equivalent to the measurement of musical value. This approach makes
two assumptions. First, the objective of musical activity is to obtain a product that can be
labelled as an expression of eminent creativity. Second, the value of the musical product
lies in its material constituents (the sounds or their symbolic representation, i.e., scores
or recordings of performances). In this case, standards are defined by the adopted compositional technique. Given a technique-centered metric, deviations are seen as spurious,
less valuable manifestations. Another problem of the product-centered approach is the
overrated reliance on domain-specific expert judgement. When asked to evaluate musical products - as is done using Amabile's (1996) Consensual Assessment Technique - experts apply socially accepted views on creativity. These views are the result of several
years of musical training and experience with eminent forms of creativity. Given the different requirements of professional and non-professional participants [Lima et al. 2012],
this bias may render their assessment less useful to everyday-creativity manifestations.
To avoid these pitfalls, ubiquitous music projects rely on a mix of assessment techniques
[Keller et al. 2011b], engaging small numbers of expert and untrained subjects in different musical activities in a variety of environmental conditions. This is usually described as
’triangulation’ within the behavioral research literature. This approach does not make assumptions regarding the compositional techniques, giving the same weight to musicians’
and lay-people’s feedback. Data is extracted from the emerging relationships among the
user profiles, the activities, the environmental conditions and the support infrastructure.
4. Prototyping platforms for ubiquitous music
During the initial phase of ubiquitous music research (2007-2010), the need for a short development cycle for ubimus infrastructure faced multiple obstacles. On one hand, web deployment featured little or no support for audio prototyping. Java and Adobe Flash were the two technologies that provided the most extensive resources for audio applications [Keller et al. 2011a] [Miletto et al. 2011]. While Java was supported on several
mobile platforms, such as JavaME and Android, Adobe Flash was not always available
on mobile devices. Hybrid approaches to ubimus system development were introduced
involving the use of Javascript and Java-based synthesis engines [Keller et al. 2011b]. An
example of this proposal is the ubimus prototyping environment mow3s-ecolab. We describe a case example of the use of mow3s-ecolab in ubiquitous musical activities: the
Harpix 1.0 study.
More recently, the development of HTML led to the introduction of audio-oriented
web tools. HTML5 features Web Audio and Web Midi JavaScript-based technologies
intended as standards for web deployment. Through the implementation of a ubiquitous
music application, we explore some of the potentials and constraints of the Web Audio
platform. We describe the development of the LCM Sequencer HTML5, a prototype
for the creation of loop-based musical patterns. The design of this ubimus prototype
illuminates aspects of the interaction demands of the sequencing activity and highlights
the need for accurate timing support for synchronous usage.
As extensively discussed in a recent survey by Wyse and Subramanian
(2013)[Wyse and Subramanian 2013], the web browser is now a viable platform for the
deployment of music computing applications. Three technologies are dominant in audio
development for world-wide web applications: Java, Adobe Flash, and HTML5 Web Audio. Applications based on Java can be rendered either as Applets or via Java Web Start.
Adobe Flash has grown in support by multiple browser vendors across various operating systems. Flash applications can be deployed as browser plug-ins, as well as through
Adobe Air. The HTML5 Web Audio framework for Javascript is the newest of these
three technologies. Unlike the others, it is a proposed standard that is designed to be
implemented by the browser vendors.
Web Audio is today possibly the most popular toolkit for audio development on
the web. However, it has a number of limitations. Firstly, its set of audio operations is somewhat limited. Its functionality can be extended with Javascript code, which still pays a significant performance penalty compared to natively-compiled C/C++ code. Although Javascript engines are constantly improving in speed and efficiency, running audio code entirely in Javascript is a processor-intensive task even on modern systems. However, the worst limitation is that the ScriptProcessorNode, which is used to extend the API, runs on the main thread. This can result in dropouts when another process on the main thread, for instance the user interface, interrupts or blocks the audio processing. In practical terms, this severely limits what is possible with Web Audio to the built-in processing nodes. Consequently, we need to look for a technology that allows native applications to do audio processing beyond what is possible with Javascript and Web Audio. An alternative is provided by the Native Client (NaCl) platform.
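As an illustration of the limitation just described, the following sketch (not taken from any of the prototypes discussed here; vendor prefixes are omitted and the parameter values are arbitrary) contrasts processing done with built-in Web Audio nodes, which runs in the browser's audio thread, with a ScriptProcessorNode callback, which runs on the main thread and can therefore be starved by user-interface work:

var ctx = new AudioContext();

// 1. Built-in nodes: the gain ramp is computed in the audio thread.
var osc = ctx.createOscillator();
var gain = ctx.createGain();
osc.connect(gain);
gain.connect(ctx.destination);
gain.gain.setValueAtTime(0.0, ctx.currentTime);
gain.gain.linearRampToValueAtTime(0.5, ctx.currentTime + 1.0);
osc.start();

// 2. ScriptProcessorNode: custom DSP written in Javascript, executed on the
// main thread; long-running UI handlers can delay onaudioprocess and cause dropouts.
var proc = ctx.createScriptProcessor(512, 1, 1);
var phase = 0;
proc.onaudioprocess = function (e) {
  var out = e.outputBuffer.getChannelData(0);
  for (var i = 0; i < out.length; i++) {
    out[i] = 0.5 * Math.sin(phase);
    phase += 2 * Math.PI * 440 / ctx.sampleRate;
  }
};
proc.connect(ctx.destination);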
The next three sections present examples of the three approaches just discussed:
a prototype using a hybrid Java-JavaScript support system - mow3s-ecolab; an HTML5-based prototype using the Web Audio library; and a sine-wave oscillator exemplifying
the usage of the Csound PNaCl programming environment. Each example highlights
key requirements of the support for creativity-centered ubiquitous music design involving
both musical interaction and audio processing capabilities.
5. Case study: Harpix 1.0
5.1. Interaction patterns and metaphors
The first prototype - Harpix 1.0 - exemplifies the use of the spatial tagging metaphor [Keller et al. 2011b]. Spatial tagging is defined as an interaction metaphor that makes use of virtual or material visual cues - anchors - to support creative activity (Figure 1). Anchors provide a bridge between material and cognitive resources, enhancing
the creative potential. This approach to the design of ubiquitous music systems has
found support in multiple experiments with musicians and non-musicians applying a
closely related interaction metaphor: time tagging [Keller et al. 2010] [Keller et al. 2013]
[Pinheiro da Silva et al. 2012] [Pinheiro da Silva et al. 2013] [Radanovitsck et al. 2011]
[Pimenta et al. 2012].
Alternatively, Harpix 1.0 can be described as an instantiation of the natural interaction pattern [Flores et al. 2012]. The visual elements of the interface - or anchors - can
be manipulated directly, establishing a straightforward relationship between user actions
and sound events. This section summarizes the results of an experimental study reported
in [Keller et al. 2011b].
5.2. Materials and procedures
MOW3S is a set of tools for multiplatform interface design specifically targeted for web
usage. Given the adoption of the standard HTML syntax, MOW3S can be combined
with any tool implemented in Javascript. User actions are tracked to generate control
data formatted as standard MIDI events which are used to drive the synthesis engine
Ecolab. Ecolab is a wavetable synthesis engine implemented in Java. It features support
for network connections through standard IP sockets. By adopting the DLS and General MIDI standards, consistent sonic renditions can be achieved without the need for real-time streaming of audio. Thus, Ecolab can be used as a multiplatform back-end for desktop
systems with low computational resources.
Using the mow3s-ecolab environment, Keller and coauthors [Keller et al. 2011b]
implemented a prototype based on the spatial tagging metaphor: Harpix 1.0. The Harpix
architecture comprises three layers. On the first layer, user interaction is done through text
input, mouse position tracking and mouse-wheel movement tracking. The second layer
features spatial anchors represented by multiple draggable rectangles distributed on the
browser pane. This layer provides synthesis parameter mappings linked to the positions
of the anchors on the horizontal and vertical axes. The third layer deals with data routing
and sound synthesis.
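The exact implementation of the Harpix layers is not reproduced here; as a rough sketch of the second layer's mapping, the following Javascript fragment (function names and scaling values are hypothetical, not code from the prototype) shows how the position of a draggable anchor on the browser pane could be tracked and rescaled to MIDI-style control values before being routed to the synthesis back-end:

// Hypothetical sketch: a draggable anchor whose horizontal and vertical
// positions are rescaled to 0-127 and emitted as MIDI-style control events.
// The anchor element is assumed to be absolutely positioned inside the pane.
function attachAnchor(anchor, pane, sendControlChange) {
  var dragging = false;
  anchor.addEventListener("mousedown", function () { dragging = true; });
  document.addEventListener("mouseup", function () { dragging = false; });
  document.addEventListener("mousemove", function (e) {
    if (!dragging) return;
    var rect = pane.getBoundingClientRect();
    var x = Math.min(Math.max(e.clientX - rect.left, 0), rect.width);
    var y = Math.min(Math.max(e.clientY - rect.top, 0), rect.height);
    anchor.style.left = x + "px";
    anchor.style.top = y + "px";
    sendControlChange(20, Math.round(127 * x / rect.width));        // horizontal axis mapping
    sendControlChange(21, Math.round(127 * (1 - y / rect.height))); // vertical axis mapping
  });
}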
Figure 1. The spatial tagging metaphor in Harpix 1.0.
Three subjects carried out 37 interaction sessions, comprising multiple conditions (see Table 1). The experimenters applied the CSI-NAP v.01 protocol to assess the level of support provided by the Harpix system for creative musical activities (on a 0-10 scale), focusing on six creativity support factors: productivity, expressiveness, explorability, concentration, enjoyment and collaboration. Enjoyment was high during creative (9.5 ± 1.08) and
exploratory activities (8.42 ± 1.78). Expressiveness was also highly rated in creative activities (9.10 ± 0.99). On the other hand, collaboration was poorly judged in all conditions
(5.95 ± 2.84).
Activity/Participants   solo   duo   trio    i
creative sessions         4      5     3    12
exploratory sessions      4      5     3    12
imitative sessions        2     10     3    15
i                        10     21     6    37
Table 1. Matrix of experimental conditions.
N=3, i=37        mean   std. dev.
productivity     7.3    1.68
expressiveness   7.51   2.51
explorability    6.08   2.54
concentration    7.24   2.36
enjoyment        7.89   2.5
collaboration    5.95   2.84
Table 2. Creativity support factors: mean ratings and standard deviations.
6. Case study 2: LCM Sequencer HTML5
6.1. Interaction patterns and metaphors
The second prototype provides an example of the application of the event sequencing
interaction pattern (Figure 2). Multiple loop-based sequences are controlled through a two-tier grid interaction metaphor. On the first tier - selected by clicking the pattern option - each line is assigned a timbre. Columns provide a visual representation of the temporal distribution of the sound events. Colored cells indicate events and black cells stand for pauses. Sequence playback is controlled through three GUI elements: the start button, the stop button and the tempo, set in beats per minute. Sound events are rendered through callbacks to the Web Audio synthesis engine. The second tier - available by choosing the song option - provides a preset mechanism. A drag-and-drop mechanism supports direct manipulation of the preset orderings. Up to five presets can be sequenced
using the two preset-cell rows.
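A minimal data model for this two-tier metaphor could look as follows; the structure is inferred from the description above and the identifiers are assumptions, not code from the LCM Sequencer itself:

// Tier one: a pattern is a grid whose rows are timbres and whose columns are
// steps; true marks an event, false a pause. Tier two: a song is an ordered
// list of preset indices, reordered by drag-and-drop.
var NUM_TIMBRES = 4;
var NUM_STEPS = 16;

function emptyPattern() {
  var grid = [];
  for (var row = 0; row < NUM_TIMBRES; row++) {
    var line = [];
    for (var col = 0; col < NUM_STEPS; col++) line.push(false);
    grid.push(line);
  }
  return grid;
}

var presets = [emptyPattern(), emptyPattern()]; // stored pattern presets
var song = [0, 1, 0];                           // preset ordering (up to five entries)

function toggleCell(pattern, row, column) {
  // Clicking a cell in the pattern view toggles an event on or off.
  pattern[row][column] = !pattern[row][column];
}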
6.2. Discussion of results
One of the caveats encountered during the preliminary design cycle was the imprecision of the Javascript timer. To circumvent this limitation we resorted to the setTimeout method, implementing a polling system with higher resolution for event scheduling. This worked well on stationary platforms but presented occasional problems when running on
the Android operating system. The Web Audio timer was accurate at speeds close to 500
BPM. Audio clicks were observed at higher speeds.
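The scheduling strategy described above is a common one for Web Audio sequencers; the following sketch (details such as the look-ahead window and the blip-generating playStep() stand-in are assumptions, not the LCM Sequencer source) shows a setTimeout polling loop that schedules events slightly ahead of time on the Web Audio clock:

var ctx = new AudioContext();
var bpm = 120;
var stepIndex = 0;
var nextStepTime = ctx.currentTime;
var LOOKAHEAD = 0.1;     // seconds scheduled ahead on the audio clock
var POLL_INTERVAL = 25;  // milliseconds between setTimeout callbacks

function playStep(step, when) {
  // Stand-in for the sequencer's sound rendering: a short blip scheduled
  // at an exact time on the Web Audio clock.
  var osc = ctx.createOscillator();
  var env = ctx.createGain();
  osc.frequency.value = (step % 4 === 0) ? 880 : 440;
  env.gain.setValueAtTime(0.3, when);
  env.gain.exponentialRampToValueAtTime(0.001, when + 0.05);
  osc.connect(env);
  env.connect(ctx.destination);
  osc.start(when);
  osc.stop(when + 0.05);
}

function scheduler() {
  // Schedule every step that falls inside the look-ahead window.
  while (nextStepTime < ctx.currentTime + LOOKAHEAD) {
    playStep(stepIndex, nextStepTime);
    nextStepTime += 60.0 / bpm / 4;  // advance by one sixteenth note
    stepIndex = (stepIndex + 1) % 16;
  }
  setTimeout(scheduler, POLL_INTERVAL);
}
scheduler();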
Through informal usage testing, we observed that the drag-and-drop preset feature
provides an effective shortcut for quick comparisons among multiple complex sequences.
This is particularly advantageous when compared to sequencers that are operated through
buttons. Rather than manipulating numeric values, the user has direct access to the reordering operations. Given that the temporal order of the sequences is correlated to the
spatial order of the GUI elements, the anchoring cognitive mechanism furnishes grounding for this interaction metaphor [Keller et al. 2010].
Figure 2. Two-tier grid interaction metaphor for event sequencing: LCM Sequencer HTML5.
7. Case study 3: sine-wave oscillator in Csound PNaCl
7.1. Interaction patterns and metaphors
The sine-wave oscillator demonstrates a minimal application of the functionality provided
by the Csound PNaCl module. Given the didactic objective of the code, we focused on
the use of one controller - represented by an HTML element - and one synthesis parameter - the oscillator’s frequency. This is one of the simplest uses of the process control
interaction pattern.
7.2. Materials and procedures
The Native Client (NaCl) platform (https://developers.google.com/native-client) allows the use of C and C++ code to create components that are accessible to client-side Javascript and run natively inside the browser. NaCl is described as a sandboxing technology, as it provides a safe environment for code to be executed, in an OS-independent manner [Yee et al. 2009] [Sehr et al. 2010].
The Portable NaCl [Donovan et al. 2010] toolchain, used to implement Csound
in this case study, is completely independent of any existing architecture, and thus it is
available for a variety of systems. However, PNaCl is currently only supported by the Chrome and Chromium browsers (under most operating systems; the iOS and Android versions do not yet support it). Since version 31, Chrome enables PNaCl by default, allowing applications created with that technology to work completely out-of-the-box. PNaCl modules can be served from anywhere on the open web.
The port of the Csound language to the PNaCl platform is complete, apart from its
plugin opcodes, which are not available due to the absence of dynamic loading on this platform. It
allows for realtime audio input and output, and it contains a complete Javascript interface
that is used to control it. MIDI can be used in the form of MIDI files or through the
Javascript implementation (WebMIDI).
7.2.1. Prototype Example
The following script demonstrates a minimal application using the functionality provided by the Csound PNaCl module (http://vlazzarini.github.io/docs/pnacl_csound.html). It consists of the implementation of the moduleDidLoad() callback, where the Csound engine is started (with csound.Play()) and a simple Csound instrument is compiled with csound.CompileOrc(). This will produce a sine wave whose frequency can be controlled by changing the value of the HTML element with id freq:
function moduleDidLoad() {
  // Start the Csound engine and compile a minimal orchestra: instrument 1
  // reads the "freq" software bus channel and drives a sine oscillator.
  csound.Play();
  csound.CompileOrc(
    "schedule 1,0,-1 \n" +
    "instr 1 \n" +
    "kfr chnget \"freq\" \n" +
    "a1 oscili 0.5,kfr \n" +
    "outs a1,a1 \n" +
    "endin");
  SetFreq();
}

function attachListeners() {
  // Update the frequency channel whenever the HTML control changes.
  document.getElementById("freq").
    addEventListener("change", SetFreq);
}

function SetFreq() {
  var val = document.getElementById("freq").value;
  csound.SetChannel("freq", val);
}
8. Discussion of results
To test the application of the spatial tagging metaphor, our team implemented a prototype
based on Java and Javascript browser technology to support creative musical activities:
Harpix 1.0. Harpix was used in an experiment encompassing three types of musical activities by three subjects. The assessment of creativity support indicated a high performance
in the creative and exploratory activities, with particular emphasis on two factors: enjoyment and expressiveness. However, the collaboration and explorability factors were not
evaluated positively. Imitative activities also yielded low scores.
A second prototype used the HTML5 Web Audio library to support the application of the event sequencing interaction pattern. A two-tier interaction metaphor was
developed for synchronous manipulation of complex-sequence orderings. The preset-cell drag-and-drop mechanism showed good potential during preliminary testing, hinting at a common ground between this interaction metaphor, time tagging and spatial tagging
[Keller et al. 2010] [Keller et al. 2011b]. The Javascript timer did not perform well. The
Web Audio timer performed better, but usage at high metronome speeds produced clicks.
The third prototype explored the facilities provided by the Csound PNaCl environment. We introduced the use of PNaCl for audio programming through a port of the Csound
language and audio engine. As one of the simplest uses of the process control interaction
pattern, the prototype sine-wave HTML-controlled oscillator provided an opportunity to
demonstrate the advantages and limitations of this new open-source sand-boxing technology developed as part of the Chromium project. The fully functional implementation of
the Csound PNaCl environment features a mid-latency callback mechanism (ca. 10-11
ms, 512 frames at 44.1 or 48 kHz sampling rate) with uniform performance across various platforms. The Audio API design is very straightforward, but it only supports stereo
output at one of the two sampling rates just mentioned.
The technologies employed in the development of the three prototypes reported in
this paper showed different types of limitations for audio programming and interaction
support. On one hand, browser-based prototyping, as introduced by the mow3s-ecolab
environment, provides a flexible way to deploy and test interaction metaphors. Standard libraries, such as the HTML5 Web Audio and Web Midi, have good potential for
wide adoption but currently present design problems that limit their usage in synchronous
activities and audio programming tasks. At this point, they are better suited for asynchronous support. We implemented a new set of technologies for audio programming
for web applications. The Csound PNaCl environment features relatively low-latency
performance and incorporates the know-how developed over 30 years of Csound usage,
providing a path for the development of ubiquitous music ecosystems that goes beyond
the HTML5 Web Audio framework.
References
Amabile, T. (1996). Creativity in Context. Boulder, CO: Westview Press.
Boulez, P. (1986). Orientations: Collected Writings. London, UK: Faber and Faber.
Brinkmann, P. (2012). Making Musical Apps: Using the Libpd Sound Engine. O’Reilly
& Associates Incorporated.
Bryan, N. J., Herrera, J., Oh, J., and Wang, G. (2010). Momu: A mobile music toolkit. In
Proceedings of NIME 2010.
Donovan, A., Muth, R., Chen, B., and Sehr, D. (2010). PNaCl: Portable Native Client
Executables. Google White Paper.
Essl, G. (2010). Urmus: an environment for mobile instrument design and performance.
In Proceedings of ICMC 2010.
Essl, G. and Rohs, M. (2006). Mobile STK for Symbian OS. In Proceedings of ICMC 2006.
Flores, L., Miletto, E., Pimenta, M., Miranda, E., and Keller, D. (2010). Musical interaction patterns: Communicating computer music knowledge in a multidisciplinary
project. In Proceedings of the 28th ACM International Conference on Design of Communication, SIGDOC ’10, pages 199–206, New York, NY, USA. ACM.
Flores, L. V., Pimenta, M. S., and Keller, D. (2012). Patterns of musical interaction
with computing devices. In Proceedings of the III Ubiquitous Music Workshop (III UbiMus). São Paulo, SP: Ubiquitous Music Group (g-ubimus).
Hornbaek, K. (2006). Current practice in measuring usability: Challenges to usability
studies and research. International Journal of Human-Computer Studies, 64(2):79–102.
Keller, D., Barreiro, D. L., Queiroz, M., and Pimenta, M. S. (2010). Anchoring in ubiquitous musical activities. In Proceedings of the International Computer Music Conference, pages 319–326. Ann Arbor, MI: MPublishing, University of Michigan Library.
Keller, D., Ferreira da Silva, E., Pinheiro da Silva, F., Lima, M. H., Pimenta, M. S., and
Lazzarini, V. (2013). Everyday musical creativity: An exploratory study with vocal
percussion (criatividade musical cotidiana: um estudo exploratório com sons vocais
percussivos). In Proceedings of the National Association of Music Research and PostGraduation Congress - ANPPOM (Anais do Congresso da Associação Nacional de
Pesquisa e Pós-Graduação em Música - ANPPOM). Natal, RN: ANPPOM.
Keller, D., Flores, L. V., Pimenta, M. S., Capasso, A., and Tinajero, P. (2011a). Convergent trends toward ubiquitous music. Journal of New Music Research, 40(3):265–276.
Keller, D., Lima, M. H., Pimenta, M. S., and Queiroz, M. (2011b). Assessing musical creativity: material, procedural and contextual dimensions. In Proceedings of the National
Association of Music Research and Post-Graduation Congress - ANPPOM (Anais
do Congresso da Associação Nacional de Pesquisa e Pós-Graduação em Música - ANPPOM), pages 708–714. Uberlândia, MG: ANPPOM.
Keller, D., Otero, N., Pimenta, M. S., Lima, M. H., Johann, M., Costalonga, L., and
Lazzarini, V. (2014). Relational properties in interaction aesthetics: The ubiquitous
music turn. In Proceedings of the Electronic Visualisation and the Arts Conference
(EVA-London 2014). London: Computer Arts Society Specialist Group.
Lazzarini, V., Costello, E., Yi, S., and Fitch, J. (2014). Csound on the web. In Proceedings
of the Linux Audio Conference (LAC2014).
Lazzarini, V., Yi, S., Timoney, J., Keller, D., and Pimenta, M. S. (2012). The mobile
csound platform. In Proceedings of the International Computer Music Conference,
pages 163–167, Ljubljana. ICMA, Ann Arbor, MI: MPublishing, University of Michigan Library.
Lima, M. H., Keller, D., Pimenta, M. S., Lazzarini, V., and Miletto, E. M. (2012).
Creativity-centred design for ubiquitous musical activities: Two case studies. Journal of Music, Technology and Education, 5(2):195–222.
Miletto, E. M., Pimenta, M. S., Bouchet, F., Sansonnet, J.-P., and Keller, D. (2011). Principles for music creation by novices in networked music environments. Journal of New
Music Research, 40(3):205–216.
Pimenta, M. S., Miletto, E. M., Keller, D., and Flores, L. V. (2012). Technological support
for online communities focusing on music creation: Adopting collaboration, flexibility and multiculturality from Brazilian creativity styles, volume Cases on Web 2.0 in
Developing Countries: Studies on Implementation, Application and Use, chapter 11.
Vancouver, BC: IGI Global Press.
Pinheiro da Silva, F., Keller, D., Ferreira da Silva, E., Pimenta, M. S., and Lazzarini, V.
(2013). Everyday musical creativity: exploratory study of ubiquitous musical activities
(criatividade musical cotidiana: estudo exploratório de atividades musicais ubı́quas).
Música Hodie, 13:64–79.
Pinheiro da Silva, F., Pimenta, M. S., Lazzarini, V., and Keller, D. (2012). Time tagging
in its niche: Engagement, explorability and creative attention (a marcação temporal no
seu nicho: Engajamento, explorabilidade e atenção criativa). In Proceedings of the III
Ubiquitous Music Workshop (III UbiMus). São Paulo, SP: Ubiquitous Music Group (g-ubimus).
Radanovitsck, E. A. A., Keller, D., Flores, L. V., Pimenta, M. S., and Queiroz, M. (2011).
mixdroid: Time tagging for creative activities (mixdroid: Marcação temporal para
atividades criativas). In Proceedings of the XIII Brazilian Symposium on Computer
Music (SBCM). Vitória, ES: SBC.
Roberts, C. (2011). Control: Software for End-User Interface Programming and Interactive Performance. In Proceedings of the ICMC 2011, Huddersfield, UK.
Roberts, C., Wakefield, G., and Wright, M. (2013). The Web Browser As Synthesizer
And Interface. In Proceedings of the International Conference on New Interfaces for
Musical Expression.
Schafer, R. M. (1977). The Tuning of the World. New York, NY: Knopf.
Sehr, D., Muth, R., Biffe, C., Khimenko, V., Pasko, E., Schimpf, K., Yee, B., and Chen,
B. (2010). Adapting Software Fault Isolation to Contemporary CPU Architectures. In
19th USENIX Security Symposium.
Wyse, L. and Subramanian, S. (2013). The Viability of the Web Browser as a Computer
Music Platform. Computer Music Journal, 37(4):10–23.
Yee, B., Sehr, D., Dardyk, G., Chen, J. B., Muth, R., Ormandy, T., Okasaka, S., Narula,
N., and Fullagar, N. (2009). Native Client: A Sandbox for Portable, Untrusted x86
Native Code. In 2009 IEEE Symposium on Security and Privacy.
Ubiquitous Computing meets Ubiquitous Music
Flávio L. Schiavoni1, Leandro Costalonga2
1 Departamento de Computação – Universidade Federal de São João Del Rei (UFSJ), Av. Visconde do Rio Preto, s/nº, CEP 36301-360, São João Del Rei – MG – Brazil
2 CEUNES - Universidade Federal do Espírito Santo (UFES), Rodovia BR 101 Norte, Km. 60, Bairro Litorâneo, CEP 29932-540, São Mateus – ES
[email protected], [email protected]
1. Extended Abstract
Ubiquitous computing (Ubicomp) is computing everywhere, anywhere [Langheinrich 2001], anytime [Coroama et al. 2004], and also computing in anything and everything [Greenfield 2006]. It is also called invisible computing [Borriello 2008], pervasive computing [Satyanarayanan 2001] and everyday computing [Abowd and Mynatt 2000], among other names. Despite the different names, Ubicomp is a way of seeing computing in which several devices typically have to work together to perform a particular task, creating smart environments [Coroama et al. 2004] or intelligent environments [Brumitt et al. 2000]. Nowadays (2014), music devices are ubiquitous in daily life. Increasingly, we are seeing computational systems incorporating sensors such as microphones and headphone outputs [Bellotti and Sellen 1993], transforming many everyday devices into ubiquitous music devices. The popularity of Ubicomp, together with the evolution of musical devices, brought this concept into the arts in a field called Ubiquitous Music (Ubimus). The Ubimus concepts and motivations defined by Keller [Keller et al. 2009] include merging sound sources and music interfaces with the environment in a ubiquitous form.
Previous research and efforts from the Ubiquitous Music Group have included several discussions involving collective creation [Ferraz and Keller 2014], interaction aesthetics [Keller et al. 2014], methodology for creativity-centred software design [Lima et al. 2012], open issues in current musical practices [Keller et al. 2011] and other relevant aspects of the social and musical dimensions.
Beyond the musical and social discussion in Ubimus, we believe that computer scientists can also take part in this research field once a clear way to contribute to Ubimus research is found. From a technological point of view, Ubimus, like Ubicomp, is not a single research field in Computer Science. Ubimus merges the Ubicomp research field with fields defined in Sound and Music Computing and/or Computer Music.
Computer Music and Sound and Music Computing involve several subjects in the Computer Science field, namely: Music Information Retrieval (MIR), Sonic Interaction Design, Mobile Music Computing, Live Coding, Networked Music Performance, Human-Computer Musical Interaction, New Interfaces for Musical Expression (NIME), Digital Audio Effects, Languages for Computer Music, and more.
The possibility of carrying out all these Computer Music activities on mobile computing devices (such as mobile phones, tablets and netbooks) [Lazzarini and Yi 2012] can emerge as the Ubimus research field for computer scientists.
Trying to map a computer science research field onto Ubimus is not an easy task. It is possible to fall into the gap between technological possibilities and our ability to put them to good use. For this reason, it may be pointless to develop Ubimus hardware or software without some partnership with musicians, composers and artists, and without keeping in mind that our main goal is not to develop technology for technology's sake but for music. Such a partnership can help a scientist to find a human need, a need for expression, a previously impossible musical creation that only this kind of device can bring into being.
Most Ubimus issues should not be solved in the technological field but in social fields. Nonetheless, without the support of computer scientists it is not easy for social scientists to break through the barriers of technology and create new concepts using technology. Keller [Keller et al. 2010] describes Ubimus as "an uncharted territory". Music researchers are doing their work. Maybe it is time to start exploring the quirks of this territory from the computer science field as well.
References
Abowd, G. D. and Mynatt, E. D. (2000). Charting past, present, and future research in ubiquitous computing. ACM Trans. Comput.-Hum. Interact., 7(1):29–58.
Bellotti, V. and Sellen, A. (1993). Design for privacy in ubiquitous computing environments. In Proceedings of the Third Conference on European Conference on Computer-Supported Cooperative Work, ECSCW'93, pages 77–92, Norwell, MA, USA. Kluwer Academic Publishers.
Borriello, G. (2008). Invisible computing: automatically using the many bits of data we
create. Philosophical Transactions of the Royal Society A: Mathematical, Physical
and Engineering Sciences, 366(1881):3669–3683.
Brumitt, B., Krumm, J., Meyers, B., and Shafer, S. (2000). Ubiquitous computing and
the role of geometry. Personal Communications, IEEE, 7(5):41–43.
Coroama, V., Bohn, J., and Mattern, F. (2004). Living in a smart environment - implications for the coming ubiquitous information society. In Systems, Man and Cybernetics, 2004 IEEE International Conference on, volume 6, pages 5633–5638.
Ferraz, S. and Keller, D. (2014). MDF: Proposta preliminar do modelo dentro-fora de criação coletiva. Cadernos de Informática, 8(2):57–67.
Greenfield, A. (2006). Everyware: The Dawning Age of Ubiquitous Computing.
Peachpit Press, Berkeley, CA, USA.
Keller, D., Barreiro, D. L., Queiroz, M., and Pimenta, M. S. (2010). Anchoring in
ubiquitous musical activities. In Proceedings of the International Computer Music
Conference, pages 319–326. Ann Arbor, MI: MPublishing.
Keller, D., Barros, A., Farias, F., Nascimento, R., Pimenta, M., Flores, L., Miletto, E., Radanovitsck, E., Serafini, R., and Barraza, J. (2009). Música ubíqua: conceito e motivação. In Proceedings of the ANPPOM, pages 539–542. Curitiba, PR: Associação Nacional de Pesquisa e Pós-Graduação em Música.
Keller, D., Flores, L. V., Pimenta, M. S., Capasso, A., and Tinajero, P. (2011).
Convergent trends toward ubiquitous music. Journal of New Music Research,
40(3):265–276.
Keller, D., Otero, N., Lazzarini, V., Pimenta, M. S., de Lima, M. H., Johann, M., and Costalonga, L. (2014). Relational properties in interaction aesthetics: The ubiquitous music turn. In Proceedings of the Electronic Visualisation and the Arts Conference (EVA-London 2014). London: Computer Arts Society Specialist Group.
Langheinrich, M. (2001). Privacy by design - principles of privacy-aware ubiquitous systems. In Proceedings of the 3rd International Conference on Ubiquitous Computing, UbiComp '01, pages 273–291, London, UK. Springer-Verlag.
Lazzarini, V. and Yi, S. (2012). Csound for Android. In Proceedings of the Linux Audio Conference, Stanford, CA, USA. LAD.
Lima, M. H., Keller, D., Pimenta, M. S., Lazzarini, V., and Miletto, E. M. (2012). Creativity-centred design for ubiquitous musical activities: Two case studies. Journal of Music, Technology and Education, 5(2):195–222.
Satyanarayanan, M. (2001). Pervasive computing: vision and challenges. Personal
Communications, IEEE, 8(4):10–17.
Progressive Disclosure
Antonio D'Amato
Conservatorio Statale di Musica di Avellino IT
[email protected]
Abstract. Progressive Disclosure is a short piece in which, within an imaginary landscape, an unknown machine is progressively disclosed and explained in order to reveal its inner functions. The piece is a reflection on concepts of approach modalities and on the comprehension of the properties or qualities that an object possesses, and of its functions. Long, slow sound objects and impulsive sounds build up the piece. These elements are merged and extensively overlapped in order to develop an imaginary panorama with basic elements of a musical vocabulary. Synthesized and acoustically derived sounds are both used, but the focus here is mainly on the description of a progressively closer observation of a visionary machine.
link to the audio file (both HQ wav and mp3):
https://www.dropbox.com/sh/dgt7pkzaszimn0z/AADyeHU7l4yVDY­4moBAfHNia .
Ubiquitous Computing and bodily interaction in learning rhythmic performance
Thiago Marcondes Santos1, Denise Filippo2, Mariano Pimentel3
1,3 PPGI - Programa de Pós-Graduação em Informática – Universidade Federal do Estado do Rio de Janeiro (UniRio), Av. Pasteur 458, Térreo, Urca, 22290-240, Rio de Janeiro, RJ, Brazil
2 Escola Superior de Desenho Industrial (Esdi) - Universidade do Estado do Rio de Janeiro (Uerj), R. Evaristo da Veiga, 95, Centro, 20031-040, Rio de Janeiro, RJ, Brazil
[email protected], [email protected], [email protected]
Abstract. This article describes an investigation into the use of ubiquitous computing in the context of a primary school classroom with the aim of promoting the learning and experiencing of rhythmic concepts. The school environment was transformed into a sound laboratory, where students participated in different types of interactions. Building upon physical contacts that can be easily executed and with which students were already familiar, such as clapping hands, the classroom was presented as a collective musical instrument, which lowered the technical barriers to musical performance.
1. Introduction
Learning the technique needed to play traditional musical instruments is an arduous and lengthy process; as a result, when students lack the skill required to produce the notes or elements present in musical discourse, they concentrate more on how to generate the sound and less on how the sounds combine to build that discourse. Educators such as Dalcroze (1921) advocated the use of different ways of learning music, such as body percussion and gestures, to reduce students' difficulties. This work investigated a proposal to address students' technical/instrumental difficulties: the use of sound floors in the classroom to facilitate access to the sounds of different instruments managed on the computer.
2. The device
The proposed device consists of software (SoundPlant), hardware (a computer and a Makey Makey interface) and a floor made of EVA foam tiles. Sound is produced when the two students, each standing on one of the tiles, touch each other's bodies. The sound produced is assigned to the tiles through its configuration in the software.
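SoundPlant itself is a desktop application and the study did not involve web programming; purely as an illustration of the principle (the Makey Makey reaches the computer as an ordinary keyboard, so closing a circuit through the students' bodies produces a key event that triggers a sample), a Javascript sketch with hypothetical sample files could look like this:

var ctx = new AudioContext();
var buffers = {}; // key code -> decoded drum sample

function loadSample(keyCode, url) {
  var request = new XMLHttpRequest();
  request.open("GET", url, true);
  request.responseType = "arraybuffer";
  request.onload = function () {
    ctx.decodeAudioData(request.response, function (buffer) {
      buffers[keyCode] = buffer;
    });
  };
  request.send();
}

// Hypothetical sample files mapped to the Makey Makey's default outputs:
// space bar (32) and the left and right arrow keys (37, 39).
loadSample(32, "kick.wav");
loadSample(37, "snare.wav");
loadSample(39, "hihat.wav");

document.addEventListener("keydown", function (e) {
  var buffer = buffers[e.keyCode];
  if (!buffer) return;
  var source = ctx.createBufferSource();
  source.buffer = buffer;
  source.connect(ctx.destination);
  source.start();
});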
3. Case study
An exploratory case study was carried out with 5th-grade primary school students. The data collected for this case study came from a questionnaire, a focus group and direct observation. The activity was recorded by 2 cameras. Six students were invited by the class music teacher to take part in the activity: one 40-minute music lesson supported by the proposed device. The lesson had different stages: a conceptual approach to the pulse and its divisions; listening to music to perceive different drum sounds and rhythms in a song suggested by the teacher; performance of the rhythms with the students clapping; presentation and exploration of the ASU; performance of the previously analysed rhythms and improvisations with the ASU.
In the presentation and exploration stage, the 6 students were organized into 3 pairs so as to obtain the 3 drum sounds assigned via SoundPlant to the EVA tiles. The students also tried touching their partners' arms and legs to produce sounds, having fun in the process. Next, in the rhythm performance stage, the same 3 rhythmic lines of the studied song, which had previously been practised only with individual clapping, were performed by means of the device. In this process, the students were able to perform and listen to the studied song with a drum kit being played by 12 hands. They were then asked to freely create rhythms and to interact with one another.
4. Results
The questionnaire data indicated that all 6 students learn music exclusively through the public school and that none of them had musical instruments at home. The data also indicated the absence of places, relatives or friends offering musical initiation outside school. The questionnaire further showed that the activity was considered "very pleasant" by all the students. All of them also answered that they would be interested in taking another lesson with the ASU device. The activity was perceived as playful and easy to access and use. Two students, one aged 14 and the other 11, answered the questions more quickly, but all 6 students understood and were able to operate the ASU easily and quickly. Two students who had to perform a faster rhythmic line had difficulty fitting their part in with their classmates' rhythms. However, through collaboration and a great deal of communication they gradually improved their rhythm.
5. Conclusion
This work presented a proposal for a music education activity based on a device that shows how the use of new computational technologies enables the teaching of music through artefacts that do not demand the lengthy technical learning of traditional musical instruments. The case study carried out provides evidence that the device can be used in music lessons as a non-excluding alternative to traditional musical instruments.
References
Dalcroze, E. J. (1921). Rhythm, Music and Education. G. P. Putnam's Sons, New York.
Volpe, G., Varni, G., Mazzarino, B., and Addessi, A. (2012). BeSound: Embodied reflexion for music education in childhood. In IDC 2012 Short Papers, 12th-15th June, Bremen, Germany.
Weiser, M. (1991). The computer for the twenty-first century. Scientific American, 265(3):94-104.
Zhou, Y., Percival, G., Wang, X., Wang, Y., and Zhao, S. (2011). MOGCLASS: Evaluation of a collaborative system of mobile devices for classroom music education of young children. In Proceedings of CHI 2011. School of Computing (SoC), National University of Singapore.
Balance: a study of the digital translation of the body in equilibrium
Pablo Gobira
Escola Guignard – Universidade do Estado de Minas Gerais (UEMG) – Belo Horizonte – MG – Brazil, [email protected]
Raphael Prota
Escola Guignard – Universidade do Estado de Minas Gerais (UEMG) – Belo Horizonte – MG – Brazil, [email protected]
Ítalo Travenzoli
Escola de Belas Artes – Universidade Federal de Minas Gerais (UFMG) – Belo Horizonte – MG – Brazil, [email protected]
Keywords: interface, sound and image, art, body-mind, balance
Abstract
This work presents the development of an experimental interactive installation that uses a 3D motion sensor to capture the body movement of a person balancing on a slackline in order to create graphics and sounds projected in the exhibition space. The installation seeks to blend the fields of music, video art, technology, play and performance in an audiovisual device that uses the dynamics of maintaining bodily balance as the condition for the graphic and sonic composition. The proposal of ubiquity lies in the insertion of sounds and images projected from sources seemingly distinct from the spot where the interactor/performer stands to interact, triggered by the combination of the voluntary and involuntary processes of maintaining bodily balance, which serve as data for the sonic and visual manifestations. The installation proposes an extreme connection between body and machine through the distraction caused by the interactor's need for balance, producing an immersion in the world of the installation, which is intended to be the world of the experiment on the body-mind. We claim, as an artistic experiment, that it is in this world that sound production is ubiquitously embedded. Specifically regarding the audio, the movements of the limbs - feet, hands and head - are translated, through generative code developed in openFrameworks and Pure Data, into variations in the frequency and resonance filters of five independent sound waves, creating an atmosphere that oscillates together with the body as a representation of balance. The sonic responses accompany changes in size and subdivisions in the superimposed layers of a mandala. Through the translation of the effort to keep balance into audiovisual stimuli, and through the combination of the sensory flows of postural control with external audiovisual stimuli, we expect to create a multitasking immersion situation that problematizes the notion of interface control and reveals new perceptions of the interaction between body and mind.
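The installation itself is built with openFrameworks and Pure Data; the Javascript fragment below is only an illustration of the kind of mapping described above, with one of the five waves modulated by a tracked limb position (the normalization and scaling values are assumptions):

var ctx = new AudioContext();
var osc = ctx.createOscillator();
var filter = ctx.createBiquadFilter();
osc.type = "sawtooth";
filter.type = "lowpass";
osc.connect(filter);
filter.connect(ctx.destination);
osc.start();

// limb: {x, y} position normalized to 0..1 by the motion sensor.
function updateFromLimb(limb) {
  // Horizontal sway shifts the oscillator frequency around a centre pitch;
  // vertical movement opens the filter and raises its resonance.
  osc.frequency.value = 110 + 220 * limb.x;
  filter.frequency.value = 200 + 3000 * limb.y;
  filter.Q.value = 1 + 20 * limb.y;
}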
Links
Interface test 1.1 – https://www.youtube.com/watch?v=xZkB9-7kNA0
Interface test 1.2 – https://www.youtube.com/watch?v=pn27dDY3iys