MFA - Digital Art and Technology
San Jose State University - 2012
M.A. - Multimedia - Digital Audio
California State University East Bay, Hayward Campus - 2006
B.S. - Digital Audio Technology
Cogswell College - 2004
Imagine researchers monitoring ongoing experiments or research systems that generate extremely large amounts of data while listening to music of their choice, discernibly modulated in real time by the information flowing through the network. This research focuses on creating and testing a generalized sound production engine that can take data streams from complex networks and output a re-rendering of familiar music chosen by the user to agree with their stylistic, cultural, and personal listening habits.
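The core idea can be sketched as simple parameter mapping: a streaming data value is normalized into a musical range and used to modulate parameters of the user's chosen music. The sketch below is purely illustrative; the function names, data source (packets per second), and parameter ranges are assumptions, not part of the actual engine.

```javascript
// Hypothetical parameter-mapping sketch: a network data stream
// modulates musical parameters of the listener's chosen music.
// All names and ranges here are illustrative assumptions.

// Map a raw data value from its observed range into a musical range,
// clamping out-of-range values so the music stays well behaved.
function mapRange(value, inMin, inMax, outMin, outMax) {
  const clamped = Math.min(Math.max(value, inMin), inMax);
  return outMin + ((clamped - inMin) / (inMax - inMin)) * (outMax - outMin);
}

// Example: network packet rate modulates playback tempo (BPM)
// and a low-pass filter cutoff applied to the chosen track.
function modulationFromStream(packetsPerSecond) {
  return {
    tempoBpm: mapRange(packetsPerSecond, 0, 10000, 60, 140),
    filterCutoffHz: mapRange(packetsPerSecond, 0, 10000, 200, 8000),
  };
}

console.log(modulationFromStream(0));     // quiet network -> slow, dark sound
console.log(modulationFromStream(10000)); // busy network -> fast, bright sound
```

A real engine would feed these parameter values to a synthesis or playback layer, but the mapping step itself is the part that keeps the music discernibly tied to the data.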
In collaboration with BBS and Center for Longevity Studies researchers Dr. Gagan Wig and Dr. Roger Malina, this is a JavaScript-based data examination tool that Prof. Gresham-Lancaster was instrumental in developing. The data comes from a longitudinal study of fMRI scans of participants "at rest". The fMRI data was analyzed and converted into a set of matrices representing a complex network of nodes and links, categorized as "systems" in congruence with biologically identified areas of the connectome (visual, auditory, etc.)
To operate the application, simply hover your mouse over any of the nodes on the circumference. Doing so triggers the sonification and hides all connections except those of the system being hovered over.
Use the 'z' key to switch the data source for sonification from left to right. Use the 'h' key to show/hide the tool palette.
If a sonification has been running for more than a few minutes, it is recommended to refresh the webpage before continuing.
Left file: Select the data set for the left diagram. There are currently four data sets to choose from.
Right file: Select the data set for the right diagram.
Sonification: Choose the type of sonification. There are currently five types: Chord, DynKlank, Exformation, Écureuil, Melody.
Listen: Select which diagram you want to use for sonification. Sonification will be performed on the set chosen here, irrespective of which diagram you hover over.
Filters: These are used to filter the data being displayed and sonified. The user can choose to display all the systems, only the top 10 systems, or the top 17 systems. In addition, they can choose minimum and maximum thresholds for the mean values being used. These changes may take some time to take effect.
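The mean-value threshold filter described above can be sketched as follows. This is a hypothetical illustration, not the tool's actual code; the link structure and system names are assumptions based on the connectome description above.

```javascript
// Hypothetical sketch of the mean-value threshold filter:
// keep only links whose mean connectivity falls within [minMean, maxMean].
// Each link connects two connectome "systems" and carries a mean value.

function filterLinks(links, minMean, maxMean) {
  return links.filter(link => link.mean >= minMean && link.mean <= maxMean);
}

const links = [
  { source: 'visual',  target: 'auditory',    mean: 0.12 },
  { source: 'visual',  target: 'somatomotor', mean: 0.55 },
  { source: 'default', target: 'auditory',    mean: 0.80 },
];

// Keep mid-strength links only; weak and very strong links are hidden
// from both the diagram and the sonification.
const visible = filterLinks(links, 0.2, 0.7);
console.log(visible.length); // 1
```

Display and sonification would both read from the filtered list, which is why changes to the thresholds affect what is heard as well as what is shown.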
Computer Music Network, Scot Gresham-Lancaster (International Conference on Network Science 2013, NetSci2013, Copenhagen, Denmark, June 3, 2013), accepted February 2013
ABSTRACT: The social climate and cultural atmosphere of the San Francisco Bay Area in the late 1970s and early 1980s, together with the emergence of the nascent microcomputer industry, fostered the creation of a new type of collaborative electronic music ensemble whose techniques have come to be known as "Computer Music Networks." A transformation from an initially heterogeneous to a more homogeneous underlying paradigm has brought with it aesthetic questions about the rationale and evolution of this new genre. forthcoming - Publication
Acoustic Environments and Sonification - Scot Gresham-Lancaster & Peter Sinclair - Leonardo Music Journal Vol. 22
ABSTRACT: Sonification can allow us to connect sound and/or music via data to the environment; in another sense by ‘displaying’ data through sound it participates in creating our acoustic environment. The authors consider here the significance of certain aspects of this relationship. 2012 - Publication
Waveguide synthesis for sonification of distributed sensor arrays, Scot Gresham-Lancaster, AI & Society, Volume 26, Springer-Verlag London Limited
ABSTRACT: A decade of work is outlined based on the use of sensors on plants to change the parameters of a fixed rotation of overlapping pitches. The use of waveguide physical-modeling synthesis allows the repeated musical figures to be changed in timbral space in real time, in a discernible set of ongoing parameter mappings from a large data set generated by various biological and atmospheric sensors. 2011 - Publication
Relationships of Sonification to Music and Sound Art, Scot Gresham-Lancaster, AI & SOCIETY: Journal of Knowledge, Culture and Communication, Springer-Verlag London Limited, 2011, DOI: 10.1007/s00146-011-0337-3
ABSTRACT: The definition of sonification has been reframed in recent years but remains somewhat in flux; the basic concepts and procedural flows have remained relatively unchanged. Recent definitions have focused on the objective, important uses of sonification in terms of scientific method. The full realization of the field's potential must also include the craft and art of music composition. The author proposes examining techniques of sonification in a two-order framework: direct and procedural. The impact of new technologies, and the historical roots of this work, argue that the framing of this broad topic should be inclusive of scientific method as well as craftsmanship and art. The expressive use of sonic time-based data flows needs to be refined and expanded, and the unexamined territory of how a broad-based population of listeners responds, on a subjective as well as an objective level, must also be included in this new field. 2011 - Publication
Historical perspectives and ongoing developments in telematic performances Scot Gresham-Lancaster J. Acoust. Soc. Am. Volume 124, Issue 4, pp. 2490-2490
ABSTRACT: This paper presents a historical perspective on the development of, and new technology applications for, performances in which the author collaborated with several other dancers, musicians, and media artists to present synchronized colocated concerts at two or more sites. This work grew out of the author's participation in the landmark computer music ensemble, "The HUB." Each of the various performances was made possible by an evolving array of videoconferencing hardware and software. 2008 - Publication
CELLPHONIA: In The News / Work-In-Progress Steve Bull, Scot Gresham-Lancaster, Tim Perkis - Third International Workshop in Mobile Music Technology (University of Sussex, Brighton, UK)
ABSTRACT: An open source cell phone karaoke opera with a mixed final performance delivered to the participant as a podcast and online as a web-based mp3. The ever-changing current state of the opera will be continuously available as an online stream-cast. 2006 - Publication
Flying Blind: Network and feedback based systems in real time interactive music performances Scot Gresham-Lancaster Proceedings of the “Beyond Noise” Conference UCSB 2002 - Publication
Mixing in the Round Scot Gresham-Lancaster Desktop Music Production Guide - Primedia Publications 2001 - Publication
The Aesthetics and History of the Hub: The Effects of Changing Technology on Network Computer Music Leonardo Music Journal, Vol. 8 (1998), pp. 39-44, doi:10.2307/1513398
ABSTRACT: The author, a member of the group the Hub, discusses the aesthetic and performance history of the group and related San Francisco Bay Area live interactive music performance practices. The performance practice of the Hub (interactive computer network music) is discussed. Particular focus is placed on the impact of changes in technology. Future applications and directions of this musical approach are discussed. 1998 - Publication
Experiences in Digital Terrain: Using Digital Elevation Models for Music and Interactive Multimedia Scot Gresham-Lancaster & William Thibault Leonardo Music Journal 7 (pp. 11-15)
ABSTRACT: The authors have investigated the use of data that describes actual geographical terrain for purposes of deriving music and animation in interactive multimedia applications. Their approach has been to create virtual "travelers": automatic processes that move through the modeled terrain and respond to what they find on the basis of programmed rules of behavior. The authors discuss several implementations of the concept and outline directions they are exploring for further development. 1997 - Publication
University of Texas at Dallas [2012–2018]
San Jose State University [2008–2018]
Arts and Humanities
California State University East Bay [2002–2004]
Expression College for New Media
Diablo Valley College
California State University Hayward [1988–2004]
2012–2015 Working with artist Mariateresa Satori and physicist Bruno Giorgini to establish a technique for turning data spontaneously generated by a given city's data flows into sound. This includes vehicular and pedestrian traffic flows as well as the interaction of crowds with architecture and urban planning. "This is based on a synthetic survey of the main ideas which are in our aim at the origin of the “Physics and the City” Symposium (Bologna-Italy, 15-17 December 2005). These ideas are developed from different point of view in the collected papers, also with different scientific languages, from the sociological or planning to the mathematical or physical ones. Very schematically we think that the present proceedings could prefigure the emergence of a new “science of the city”, really multidisciplinary and with possible spin-off on every single science concerned. Moreover with a large range of possible applications to the actual social life, in order to contribute to a better quality of life. Surely we are at the beginning and the topology of this “science of the city” it is not still well defined."
2011–2011 Watch a video regarding the residency related to this research
Sonification, the use of sound to objectively represent time-series data flows, became confused with axiomatic representations of all sound as music after the work of composer John Cage and others. Computers have given sound artists and sonification researchers access to automated data flows that fit directly into this aesthetic. However, this is not a functional use of information for practical purposes. The speaker will address his transition from a composer who used time-flow information as a compositional determinant to a researcher investigating new techniques for providing real, functional information as music: not just sound, but music mapped to various styles that a broader population of scientists would enjoy listening to over long periods of time.
2015–2018 INSTRUMENT: One Antarctic Night, an interactive artwork created from 287,800 images of the universe captured recently by the CSTAR robotic telescope in Antarctica, engages participants in creating, performing, and sharing aesthetic multi-modal remixes of astrophysics data as new visuals and sound. As they interact with the touch-sensitive remix stations in the gallery, on their mobile devices, or online, participants remix in "noise" that has been removed by the scientific process to "scratch" their own personal versions of the universe in a way that is accessible, playful, and yet meaningful for people of all ages.
It is the result of an ongoing art-science collaboration blending visual, new media and sound art, astrophysics, experimental sonification and visualization of scientific data, online and mobile participatory media and novel emerging technologies in immersive and interactive display systems and spatialized and holographic sound environments.
Motivation for creation of INSTRUMENT: One Antarctic Night arose from our observation that in an era of rapid digitization of nature and culture we face a crisis of representation. With the advent of digital technologies, the internet, and mobile media, data is no longer constrained to scientific or analytical domains; it has emerged as a cultural raw material with myriad expressive potentials. In this context, remix in the form of mashups, animated GIFs, and more has emerged as a form of 21st-century digital literacy that touches the lives of people of all ages and backgrounds. On a daily basis we create, exchange, and consume data from sources ranging from fitness wearables and consumer-transaction databases to environmental sensing. The richness of the data is such that we can generate an enormous number of interpretations to address human concerns spanning the personal to the global. Yet what we see and know, and how we see and know, is circumscribed by our choices regarding creating, storing, analyzing, and representing the data. These unspoken narratives make "big data" small by encoding agreed-upon assumptions about what we think we will find, what we think we can see and know. Art-science is emerging as a method for creating new ways of seeing and new ways of knowing. In creating this artwork, we hope to engage a broad public, the contemporary arts community, and the emerging global community of art-science practitioners in exploring the ways in which data as a cultural raw material transforms the way we experience ourselves, our world, and the universe: the ways we create and communicate meaning in science and culture.
Scot Gresham-Lancaster is a composer, performer, instrument builder, and educator. Currently teaching Sound Design at UT Dallas, his recent work at IMéRA is in second-order sonification of data sets. As a member of the HUB, he is one of the early pioneers of "computer network music" and cellphone operas. He has created a series of "co-located" international Internet performances and has developed audio for several games and interactive products. He is an expert in educational technology and techniques.
He was a student of Philip Ianni, Roy Harris, Darius Milhaud, John Chowning, Robert Ashley, Terry Riley, "Blue" Gene Tyranny, David Cope, and Jack Jarrett, among others. He has been a composer in residence at Mills College, STEIM, and the Djerassi Artist Residency Program. He has toured and recorded with the HUB, Alvin Curran, the ROVA saxophone quartet, and NYX. He has performed the music of Alvin Curran, Pauline Oliveros, John Zorn, and John Cage under their direction, and worked as a technical assistant to Lou Harrison, Iannis Xenakis, David Tudor, and others.