One of the design principles of MusicXML is to primarily represent a written piece of music in a semantic way. It provides a reference to musical artifacts that are directly relevant to musicians, independent of any particular application. It makes it easy to check an encoding against a given written piece of music. MusicXML thus represents written pitch, with a transpose element to convert transposing instruments into sounding pitch. Written pitch in this case is not necessarily the same as the position on a staff: a piece of music that looks like a middle C with an 8va line over it will be represented in octave 5, not octave 4, and the octave-shift element represents the 8va line. I think that design decision has worked very well and makes life much easier for anyone comparing a MusicXML encoding to a printed piece of music.

MusicXML's representation does have some issues in concert scores. Given MNX's desire to make better use of a single file for score and parts, it would be good to have transposition information available in a concert score. Octave transpositions that are still used in concert scores are another issue, as discussed in MusicXML issue 39. I think we can resolve these issues in MNX while still retaining the great usability benefit of having MNX directly and semantically represent what the musician sees and understands.

I can see advantages and disadvantages with both approaches, as has been previously commented by others, and I do agree that either choice is far, far preferable to supporting both. But I think it is better to notate music in written pitch.

First, from a formal point of view, I consider written pitch more in line with the objective of capturing the semantics, since for me this implies capturing how the music is written with paper and pen. In my opinion the meaning of 'semantic' should not be changed in particular cases, such as for the sound of transposing instruments. An analogy: imagine a text in English with some paragraphs in French. Using sounded pitch would be like encoding the English paragraphs with ordinary characters and the French paragraphs with phonetic symbols instead.

Second, from a pragmatic point of view, dealing with transposing instruments will always require transposing the encoded pitch, and it does not matter whether the score is stored in sounded pitch or in written pitch. This is because if the user changes the instrument for playback (e.g. from trumpet in Bb to flute in concert pitch, or to alto sax in Eb), the application will always have to re-compute pitches. So there is no gain in encoding sounded pitch, except in marginal cases where changing instruments is not allowed. On the contrary, using written pitch simplifies score display, while using sounded pitch forces extra computation to display the score properly. Databases for music analysis or music search would perhaps benefit from sounded pitch, but it is up to those applications to store music in the format most appropriate for the processing they will do.

Notice how the ottava starts at different places in different editions, yet the notes only had to be specified once. This is only possible if the notes under the ottavas are stored at sounding pitch, which is the same in all editions, rather than at written pitch, which can vary between editions. Now applications can give the user the option to switch between the layouts of different historic editions, or to define a new layout of their own. And this imposes no penalty on playback, since playback always has to deal with transposing instruments anyway, as commented above.
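The transposition argument above can be made concrete with a minimal Python sketch. This is an illustration only, not any real MusicXML API: the names `to_midi` and `sounding_midi` and the (step, alter, octave) pitch representation are invented for this example. It shows that storing written pitch plus a chromatic interval, in the spirit of MusicXML's transpose element, is enough to recover sounding pitch on demand, and that changing the playback instrument is just a matter of applying a different interval.

```python
# Semitone offsets of the natural note names above C, for simple arithmetic.
STEP_TO_SEMITONE = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def to_midi(step: str, alter: int, octave: int) -> int:
    """Convert a written pitch (step, alteration, octave) to a MIDI note number."""
    return 12 * (octave + 1) + STEP_TO_SEMITONE[step] + alter

def sounding_midi(step: str, alter: int, octave: int, chromatic: int) -> int:
    """Apply an instrument's chromatic transposition to a written pitch."""
    return to_midi(step, alter, octave) + chromatic

# Trumpet in Bb sounds a major second lower than written (chromatic = -2):
# a written C5 sounds as Bb4.
written = ("C", 0, 5)
print(sounding_midi(*written, chromatic=-2))  # 70, i.e. Bb4

# Switching playback to alto sax in Eb (a major sixth lower, chromatic = -9)
# means re-applying a different interval to the same stored written pitch.
print(sounding_midi(*written, chromatic=-9))  # 63, i.e. Eb4
```

Whichever pitch the file stores, an instrument change still costs one interval addition per note, which is the pragmatic point made above: sounded-pitch storage does not remove that computation, it only moves it.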